00:00:00.001 Started by upstream project "autotest-per-patch" build number 126239 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.042 The recommended git tool is: git 00:00:00.043 using credential 00000000-0000-0000-0000-000000000002 00:00:00.044 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.062 Fetching changes from the remote Git repository 00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.091 Using shallow fetch with depth 1 00:00:00.091 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.091 > git --version # timeout=10 00:00:00.129 > git --version # 'git version 2.39.2' 00:00:00.129 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.172 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.172 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.007 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.020 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.033 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.033 > git config core.sparsecheckout # timeout=10 00:00:03.044 > git read-tree -mu HEAD # timeout=10 00:00:03.062 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.083 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.083 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.163 [Pipeline] Start of Pipeline 00:00:03.181 [Pipeline] library 00:00:03.183 Loading library shm_lib@master 00:00:07.150 Library shm_lib@master is cached. Copying from home. 00:00:07.227 [Pipeline] node 00:00:22.286 Still waiting to schedule task 00:00:22.287 Waiting for next available executor on ‘vagrant-vm-host’ 00:01:29.776 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:29.778 [Pipeline] { 00:01:29.793 [Pipeline] catchError 00:01:29.795 [Pipeline] { 00:01:29.818 [Pipeline] wrap 00:01:29.831 [Pipeline] { 00:01:29.840 [Pipeline] stage 00:01:29.843 [Pipeline] { (Prologue) 00:01:29.861 [Pipeline] echo 00:01:29.863 Node: VM-host-SM0 00:01:29.868 [Pipeline] cleanWs 00:01:29.884 [WS-CLEANUP] Deleting project workspace... 00:01:29.884 [WS-CLEANUP] Deferred wipeout is used... 
00:01:29.890 [WS-CLEANUP] done 00:01:30.056 [Pipeline] setCustomBuildProperty 00:01:30.147 [Pipeline] httpRequest 00:01:30.174 [Pipeline] echo 00:01:30.176 Sorcerer 10.211.164.101 is alive 00:01:30.186 [Pipeline] httpRequest 00:01:30.191 HttpMethod: GET 00:01:30.192 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:01:30.192 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:01:30.193 Response Code: HTTP/1.1 200 OK 00:01:30.194 Success: Status code 200 is in the accepted range: 200,404 00:01:30.194 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:01:30.339 [Pipeline] sh 00:01:30.621 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:01:30.637 [Pipeline] httpRequest 00:01:30.653 [Pipeline] echo 00:01:30.655 Sorcerer 10.211.164.101 is alive 00:01:30.664 [Pipeline] httpRequest 00:01:30.669 HttpMethod: GET 00:01:30.670 URL: http://10.211.164.101/packages/spdk_c9ef451faea5f7d4b6b2fd6612ef95347576ac19.tar.gz 00:01:30.671 Sending request to url: http://10.211.164.101/packages/spdk_c9ef451faea5f7d4b6b2fd6612ef95347576ac19.tar.gz 00:01:30.672 Response Code: HTTP/1.1 200 OK 00:01:30.673 Success: Status code 200 is in the accepted range: 200,404 00:01:30.673 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_c9ef451faea5f7d4b6b2fd6612ef95347576ac19.tar.gz 00:01:32.845 [Pipeline] sh 00:01:33.126 + tar --no-same-owner -xf spdk_c9ef451faea5f7d4b6b2fd6612ef95347576ac19.tar.gz 00:01:36.418 [Pipeline] sh 00:01:36.699 + git -C spdk log --oneline -n5 00:01:36.699 c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:01:36.699 b26ca8289 event: add enforce_numa app option 00:01:36.699 83c8cffdc env: add enforce_numa environment option 00:01:36.699 804b11b4b env_dpdk: assert that SOCKET_ID_ANY == SPDK_ENV_SOCKET_ID_ANY 00:01:36.699 cdc37ee83 env_dpdk: deprecate spdk_env_opts_init and spdk_env_init 00:01:36.719 [Pipeline] writeFile 00:01:36.738 [Pipeline] sh 00:01:37.015 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:37.027 [Pipeline] sh 00:01:37.304 + cat autorun-spdk.conf 00:01:37.304 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.304 SPDK_TEST_NVMF=1 00:01:37.304 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.304 SPDK_TEST_USDT=1 00:01:37.304 SPDK_TEST_NVMF_MDNS=1 00:01:37.304 SPDK_RUN_UBSAN=1 00:01:37.304 NET_TYPE=virt 00:01:37.304 SPDK_JSONRPC_GO_CLIENT=1 00:01:37.304 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.310 RUN_NIGHTLY=0 00:01:37.314 [Pipeline] } 00:01:37.334 [Pipeline] // stage 00:01:37.352 [Pipeline] stage 00:01:37.354 [Pipeline] { (Run VM) 00:01:37.370 [Pipeline] sh 00:01:37.644 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:37.645 + echo 'Start stage prepare_nvme.sh' 00:01:37.645 Start stage prepare_nvme.sh 00:01:37.645 + [[ -n 7 ]] 00:01:37.645 + disk_prefix=ex7 00:01:37.645 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:01:37.645 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:01:37.645 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:01:37.645 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.645 ++ SPDK_TEST_NVMF=1 00:01:37.645 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.645 ++ SPDK_TEST_USDT=1 00:01:37.645 ++ SPDK_TEST_NVMF_MDNS=1 00:01:37.645 ++ SPDK_RUN_UBSAN=1 00:01:37.645 ++ NET_TYPE=virt 00:01:37.645 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:37.645 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.645 ++ RUN_NIGHTLY=0 00:01:37.645 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:37.645 + nvme_files=() 00:01:37.645 + declare -A nvme_files 00:01:37.645 + backend_dir=/var/lib/libvirt/images/backends 00:01:37.645 + nvme_files['nvme.img']=5G 00:01:37.645 + nvme_files['nvme-cmb.img']=5G 00:01:37.645 + nvme_files['nvme-multi0.img']=4G 00:01:37.645 + nvme_files['nvme-multi1.img']=4G 00:01:37.645 + nvme_files['nvme-multi2.img']=4G 00:01:37.645 + nvme_files['nvme-openstack.img']=8G 00:01:37.645 + nvme_files['nvme-zns.img']=5G 00:01:37.645 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:37.645 + (( SPDK_TEST_FTL == 1 )) 00:01:37.645 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:37.645 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:37.645 + for nvme in "${!nvme_files[@]}" 00:01:37.645 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:37.645 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.645 + for nvme in "${!nvme_files[@]}" 00:01:37.645 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:37.645 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.645 + for nvme in "${!nvme_files[@]}" 00:01:37.645 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:37.645 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.645 + for nvme in "${!nvme_files[@]}" 00:01:37.645 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:37.645 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.645 + for nvme in "${!nvme_files[@]}" 00:01:37.645 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:37.646 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.646 + for nvme in "${!nvme_files[@]}" 00:01:37.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:37.646 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.646 + for nvme in "${!nvme_files[@]}" 00:01:37.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:38.214 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.214 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:38.214 + echo 'End stage prepare_nvme.sh' 00:01:38.214 End stage prepare_nvme.sh 00:01:38.227 [Pipeline] sh 00:01:38.509 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:38.509 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:01:38.509 00:01:38.509 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:01:38.509 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:01:38.509 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:38.509 HELP=0 00:01:38.509 DRY_RUN=0 00:01:38.509 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:38.509 NVME_DISKS_TYPE=nvme,nvme, 00:01:38.509 NVME_AUTO_CREATE=0 00:01:38.509 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:38.509 NVME_CMB=,, 00:01:38.509 NVME_PMR=,, 00:01:38.509 NVME_ZNS=,, 00:01:38.509 NVME_MS=,, 00:01:38.509 NVME_FDP=,, 00:01:38.509 SPDK_VAGRANT_DISTRO=fedora38 00:01:38.509 SPDK_VAGRANT_VMCPU=10 00:01:38.509 SPDK_VAGRANT_VMRAM=12288 00:01:38.509 SPDK_VAGRANT_PROVIDER=libvirt 00:01:38.509 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:38.509 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:38.509 SPDK_OPENSTACK_NETWORK=0 00:01:38.509 VAGRANT_PACKAGE_BOX=0 00:01:38.509 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:38.509 FORCE_DISTRO=true 00:01:38.509 VAGRANT_BOX_VERSION= 00:01:38.509 EXTRA_VAGRANTFILES= 00:01:38.509 NIC_MODEL=e1000 00:01:38.509 00:01:38.509 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:01:38.509 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:41.793 Bringing machine 'default' up with 'libvirt' provider... 00:01:42.727 ==> default: Creating image (snapshot of base box volume). 00:01:42.986 ==> default: Creating domain with the following settings... 
00:01:42.986 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721071808_492592fb48a1a85fcc73 00:01:42.986 ==> default: -- Domain type: kvm 00:01:42.986 ==> default: -- Cpus: 10 00:01:42.986 ==> default: -- Feature: acpi 00:01:42.986 ==> default: -- Feature: apic 00:01:42.986 ==> default: -- Feature: pae 00:01:42.986 ==> default: -- Memory: 12288M 00:01:42.986 ==> default: -- Memory Backing: hugepages: 00:01:42.986 ==> default: -- Management MAC: 00:01:42.986 ==> default: -- Loader: 00:01:42.986 ==> default: -- Nvram: 00:01:42.986 ==> default: -- Base box: spdk/fedora38 00:01:42.986 ==> default: -- Storage pool: default 00:01:42.986 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721071808_492592fb48a1a85fcc73.img (20G) 00:01:42.986 ==> default: -- Volume Cache: default 00:01:42.986 ==> default: -- Kernel: 00:01:42.986 ==> default: -- Initrd: 00:01:42.986 ==> default: -- Graphics Type: vnc 00:01:42.986 ==> default: -- Graphics Port: -1 00:01:42.986 ==> default: -- Graphics IP: 127.0.0.1 00:01:42.986 ==> default: -- Graphics Password: Not defined 00:01:42.986 ==> default: -- Video Type: cirrus 00:01:42.986 ==> default: -- Video VRAM: 9216 00:01:42.986 ==> default: -- Sound Type: 00:01:42.986 ==> default: -- Keymap: en-us 00:01:42.986 ==> default: -- TPM Path: 00:01:42.986 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:42.986 ==> default: -- Command line args: 00:01:42.986 ==> default: -> value=-device, 00:01:42.986 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:42.986 ==> default: -> value=-drive, 00:01:42.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:42.986 ==> default: -> value=-device, 00:01:42.986 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.986 ==> default: -> value=-device, 00:01:42.986 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:42.986 ==> default: -> value=-drive, 00:01:42.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:42.986 ==> default: -> value=-device, 00:01:42.986 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.986 ==> default: -> value=-drive, 00:01:42.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:42.986 ==> default: -> value=-device, 00:01:42.986 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.986 ==> default: -> value=-drive, 00:01:42.986 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:42.986 ==> default: -> value=-device, 00:01:42.986 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:43.244 ==> default: Creating shared folders metadata... 00:01:43.244 ==> default: Starting domain. 00:01:45.144 ==> default: Waiting for domain to get an IP address... 00:02:03.212 ==> default: Waiting for SSH to become available... 00:02:03.212 ==> default: Configuring and enabling network interfaces... 
00:02:05.732 default: SSH address: 192.168.121.119:22 00:02:05.732 default: SSH username: vagrant 00:02:05.732 default: SSH auth method: private key 00:02:07.627 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:15.768 ==> default: Mounting SSHFS shared folder... 00:02:17.168 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:17.168 ==> default: Checking Mount.. 00:02:18.100 ==> default: Folder Successfully Mounted! 00:02:18.100 ==> default: Running provisioner: file... 00:02:19.034 default: ~/.gitconfig => .gitconfig 00:02:19.292 00:02:19.292 SUCCESS! 00:02:19.292 00:02:19.292 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:02:19.292 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:19.292 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:02:19.292 00:02:19.302 [Pipeline] } 00:02:19.323 [Pipeline] // stage 00:02:19.334 [Pipeline] dir 00:02:19.335 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:02:19.337 [Pipeline] { 00:02:19.354 [Pipeline] catchError 00:02:19.356 [Pipeline] { 00:02:19.393 [Pipeline] sh 00:02:19.670 + vagrant ssh-config --host vagrant 00:02:19.670 + sed -ne /^Host/,$p 00:02:19.670 + tee ssh_conf 00:02:22.954 Host vagrant 00:02:22.954 HostName 192.168.121.119 00:02:22.954 User vagrant 00:02:22.954 Port 22 00:02:22.954 UserKnownHostsFile /dev/null 00:02:22.954 StrictHostKeyChecking no 00:02:22.954 PasswordAuthentication no 00:02:22.954 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:22.954 IdentitiesOnly yes 00:02:22.954 LogLevel FATAL 00:02:22.954 ForwardAgent yes 00:02:22.954 ForwardX11 yes 00:02:22.954 00:02:22.970 [Pipeline] withEnv 00:02:22.973 [Pipeline] { 00:02:22.992 [Pipeline] sh 00:02:23.271 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:23.271 source /etc/os-release 00:02:23.271 [[ -e /image.version ]] && img=$(< /image.version) 00:02:23.271 # Minimal, systemd-like check. 00:02:23.271 if [[ -e /.dockerenv ]]; then 00:02:23.271 # Clear garbage from the node's name: 00:02:23.271 # agt-er_autotest_547-896 -> autotest_547-896 00:02:23.271 # $HOSTNAME is the actual container id 00:02:23.271 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:23.271 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:23.271 # We can assume this is a mount from a host where container is running, 00:02:23.271 # so fetch its hostname to easily identify the target swarm worker. 
00:02:23.271 container="$(< /etc/hostname) ($agent)" 00:02:23.271 else 00:02:23.271 # Fallback 00:02:23.271 container=$agent 00:02:23.271 fi 00:02:23.272 fi 00:02:23.272 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:23.272 00:02:23.282 [Pipeline] } 00:02:23.300 [Pipeline] // withEnv 00:02:23.311 [Pipeline] setCustomBuildProperty 00:02:23.331 [Pipeline] stage 00:02:23.333 [Pipeline] { (Tests) 00:02:23.353 [Pipeline] sh 00:02:23.627 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:23.897 [Pipeline] sh 00:02:24.170 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:24.189 [Pipeline] timeout 00:02:24.189 Timeout set to expire in 40 min 00:02:24.191 [Pipeline] { 00:02:24.210 [Pipeline] sh 00:02:24.521 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:25.087 HEAD is now at c9ef451fa nvme: add spdk_nvme_ctrlr_get_socket_id() 00:02:25.101 [Pipeline] sh 00:02:25.378 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:25.648 [Pipeline] sh 00:02:25.926 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:25.944 [Pipeline] sh 00:02:26.222 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:26.222 ++ readlink -f spdk_repo 00:02:26.222 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:26.222 + [[ -n /home/vagrant/spdk_repo ]] 00:02:26.222 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:26.222 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:26.487 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:26.487 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:26.487 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:26.487 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:26.487 + cd /home/vagrant/spdk_repo 00:02:26.487 + source /etc/os-release 00:02:26.487 ++ NAME='Fedora Linux' 00:02:26.487 ++ VERSION='38 (Cloud Edition)' 00:02:26.487 ++ ID=fedora 00:02:26.487 ++ VERSION_ID=38 00:02:26.487 ++ VERSION_CODENAME= 00:02:26.487 ++ PLATFORM_ID=platform:f38 00:02:26.487 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:26.487 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:26.487 ++ LOGO=fedora-logo-icon 00:02:26.487 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:26.487 ++ HOME_URL=https://fedoraproject.org/ 00:02:26.488 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:26.488 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:26.488 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:26.488 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:26.488 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:26.488 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:26.488 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:26.488 ++ SUPPORT_END=2024-05-14 00:02:26.488 ++ VARIANT='Cloud Edition' 00:02:26.488 ++ VARIANT_ID=cloud 00:02:26.488 + uname -a 00:02:26.488 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:26.488 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:26.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:26.744 Hugepages 00:02:26.744 node hugesize free / total 00:02:26.744 node0 1048576kB 0 / 0 00:02:26.744 node0 2048kB 0 / 0 00:02:26.744 00:02:26.744 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.744 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:26.744 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:27.000 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:27.000 + rm -f /tmp/spdk-ld-path 00:02:27.000 + source autorun-spdk.conf 00:02:27.000 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:27.000 ++ SPDK_TEST_NVMF=1 00:02:27.000 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:27.000 ++ SPDK_TEST_USDT=1 00:02:27.000 ++ SPDK_TEST_NVMF_MDNS=1 00:02:27.000 ++ SPDK_RUN_UBSAN=1 00:02:27.000 ++ NET_TYPE=virt 00:02:27.000 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:27.000 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:27.001 ++ RUN_NIGHTLY=0 00:02:27.001 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:27.001 + [[ -n '' ]] 00:02:27.001 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:27.001 + for M in /var/spdk/build-*-manifest.txt 00:02:27.001 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:27.001 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:27.001 + for M in /var/spdk/build-*-manifest.txt 00:02:27.001 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:27.001 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:27.001 ++ uname 00:02:27.001 + [[ Linux == \L\i\n\u\x ]] 00:02:27.001 + sudo dmesg -T 00:02:27.001 + sudo dmesg --clear 00:02:27.001 + dmesg_pid=5150 00:02:27.001 + [[ Fedora Linux == FreeBSD ]] 00:02:27.001 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:27.001 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:27.001 + sudo dmesg -Tw 00:02:27.001 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:27.001 + [[ -x /usr/src/fio-static/fio ]] 00:02:27.001 + 
export FIO_BIN=/usr/src/fio-static/fio 00:02:27.001 + FIO_BIN=/usr/src/fio-static/fio 00:02:27.001 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:27.001 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:27.001 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:27.001 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:27.001 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:27.001 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:27.001 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:27.001 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:27.001 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:27.001 Test configuration: 00:02:27.001 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:27.001 SPDK_TEST_NVMF=1 00:02:27.001 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:27.001 SPDK_TEST_USDT=1 00:02:27.001 SPDK_TEST_NVMF_MDNS=1 00:02:27.001 SPDK_RUN_UBSAN=1 00:02:27.001 NET_TYPE=virt 00:02:27.001 SPDK_JSONRPC_GO_CLIENT=1 00:02:27.001 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:27.001 RUN_NIGHTLY=0 19:30:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:27.001 19:30:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:27.001 19:30:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.001 19:30:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.001 19:30:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.001 19:30:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.001 19:30:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.001 19:30:52 -- paths/export.sh@5 -- $ export PATH 00:02:27.001 19:30:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.001 19:30:52 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:27.001 19:30:52 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:27.001 19:30:52 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721071852.XXXXXX 00:02:27.001 19:30:52 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721071852.5Ol2LX 00:02:27.001 19:30:52 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:27.001 19:30:52 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:27.001 19:30:52 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:27.001 19:30:52 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:27.001 19:30:52 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:27.001 19:30:52 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:27.001 19:30:52 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:27.001 19:30:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.001 19:30:52 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:02:27.259 19:30:52 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:27.259 19:30:52 -- pm/common@17 -- $ local monitor 00:02:27.259 19:30:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.259 19:30:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.259 19:30:52 -- pm/common@25 -- $ sleep 1 00:02:27.259 19:30:52 -- pm/common@21 -- $ date +%s 00:02:27.259 19:30:52 -- pm/common@21 -- $ date +%s 00:02:27.259 19:30:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721071852 00:02:27.259 19:30:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721071852 00:02:27.259 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721071852_collect-vmstat.pm.log 00:02:27.259 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721071852_collect-cpu-load.pm.log 00:02:28.191 19:30:53 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:28.191 19:30:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:28.191 19:30:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:28.191 19:30:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:28.191 19:30:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:28.191 Mon Jul 15 07:30:53 PM UTC 2024 00:02:28.191 19:30:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:28.191 v24.09-pre-230-gc9ef451fa 00:02:28.191 19:30:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:28.191 19:30:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:28.191 19:30:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:28.191 19:30:53 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:28.191 19:30:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:28.191 19:30:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.191 ************************************ 00:02:28.191 START TEST ubsan 00:02:28.191 ************************************ 00:02:28.191 using ubsan 00:02:28.191 19:30:53 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:28.191 00:02:28.191 
real 0m0.000s 00:02:28.191 user 0m0.000s 00:02:28.191 sys 0m0.000s 00:02:28.191 19:30:53 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:28.191 ************************************ 00:02:28.191 END TEST ubsan 00:02:28.191 ************************************ 00:02:28.191 19:30:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:28.191 19:30:53 -- common/autotest_common.sh@1142 -- $ return 0 00:02:28.191 19:30:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:28.191 19:30:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:28.191 19:30:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:28.191 19:30:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:28.191 19:30:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:28.191 19:30:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:28.191 19:30:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:28.191 19:30:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:28.191 19:30:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:02:28.191 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:28.191 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:28.770 Using 'verbs' RDMA provider 00:02:44.570 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:54.531 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:54.788 go version go1.21.1 linux/amd64 00:02:55.046 Creating mk/config.mk...done. 00:02:55.046 Creating mk/cc.flags.mk...done. 00:02:55.046 Type 'make' to build. 00:02:55.046 19:31:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:55.046 19:31:20 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:55.046 19:31:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:55.046 19:31:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.046 ************************************ 00:02:55.046 START TEST make 00:02:55.046 ************************************ 00:02:55.046 19:31:20 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:55.610 make[1]: Nothing to be done for 'all'. 
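For reference, the configure-and-build step recorded above condenses into the following manual sketch (hypothetical, run outside the Jenkins/autorun wrapper; the flag set, source path, and -j10 parallelism are copied verbatim from the log, and build prerequisites such as the fio sources under /usr/src/fio and the Go toolchain are assumed to be preinstalled as on the CI image):

    # condensed manual equivalent of the configure/make step recorded in this log
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
    make -j10   # same parallelism as the CI run; the bundled DPDK submodule is configured and built by make, as the Meson output below shows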
00:03:10.553 The Meson build system 00:03:10.553 Version: 1.3.1 00:03:10.553 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:10.553 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:10.553 Build type: native build 00:03:10.553 Program cat found: YES (/usr/bin/cat) 00:03:10.553 Project name: DPDK 00:03:10.553 Project version: 24.03.0 00:03:10.553 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:10.553 C linker for the host machine: cc ld.bfd 2.39-16 00:03:10.553 Host machine cpu family: x86_64 00:03:10.553 Host machine cpu: x86_64 00:03:10.553 Message: ## Building in Developer Mode ## 00:03:10.553 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:10.553 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:10.553 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:10.553 Program python3 found: YES (/usr/bin/python3) 00:03:10.553 Program cat found: YES (/usr/bin/cat) 00:03:10.553 Compiler for C supports arguments -march=native: YES 00:03:10.553 Checking for size of "void *" : 8 00:03:10.553 Checking for size of "void *" : 8 (cached) 00:03:10.553 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:10.553 Library m found: YES 00:03:10.553 Library numa found: YES 00:03:10.553 Has header "numaif.h" : YES 00:03:10.553 Library fdt found: NO 00:03:10.553 Library execinfo found: NO 00:03:10.553 Has header "execinfo.h" : YES 00:03:10.553 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:10.554 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:10.554 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:10.554 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:10.554 Run-time dependency openssl found: YES 3.0.9 00:03:10.554 Run-time dependency libpcap found: YES 1.10.4 00:03:10.554 Has header "pcap.h" with dependency libpcap: YES 00:03:10.554 Compiler for C supports arguments -Wcast-qual: YES 00:03:10.554 Compiler for C supports arguments -Wdeprecated: YES 00:03:10.554 Compiler for C supports arguments -Wformat: YES 00:03:10.554 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:10.554 Compiler for C supports arguments -Wformat-security: NO 00:03:10.554 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:10.554 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:10.554 Compiler for C supports arguments -Wnested-externs: YES 00:03:10.554 Compiler for C supports arguments -Wold-style-definition: YES 00:03:10.554 Compiler for C supports arguments -Wpointer-arith: YES 00:03:10.554 Compiler for C supports arguments -Wsign-compare: YES 00:03:10.554 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:10.554 Compiler for C supports arguments -Wundef: YES 00:03:10.554 Compiler for C supports arguments -Wwrite-strings: YES 00:03:10.554 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:10.554 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:10.554 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:10.554 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:10.554 Program objdump found: YES (/usr/bin/objdump) 00:03:10.554 Compiler for C supports arguments -mavx512f: YES 00:03:10.554 Checking if "AVX512 checking" compiles: YES 00:03:10.554 Fetching value of define "__SSE4_2__" : 1 00:03:10.554 Fetching value of define 
"__AES__" : 1 00:03:10.554 Fetching value of define "__AVX__" : 1 00:03:10.554 Fetching value of define "__AVX2__" : 1 00:03:10.554 Fetching value of define "__AVX512BW__" : (undefined) 00:03:10.554 Fetching value of define "__AVX512CD__" : (undefined) 00:03:10.554 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:10.554 Fetching value of define "__AVX512F__" : (undefined) 00:03:10.554 Fetching value of define "__AVX512VL__" : (undefined) 00:03:10.554 Fetching value of define "__PCLMUL__" : 1 00:03:10.554 Fetching value of define "__RDRND__" : 1 00:03:10.554 Fetching value of define "__RDSEED__" : 1 00:03:10.554 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:10.554 Fetching value of define "__znver1__" : (undefined) 00:03:10.554 Fetching value of define "__znver2__" : (undefined) 00:03:10.554 Fetching value of define "__znver3__" : (undefined) 00:03:10.554 Fetching value of define "__znver4__" : (undefined) 00:03:10.554 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:10.554 Message: lib/log: Defining dependency "log" 00:03:10.554 Message: lib/kvargs: Defining dependency "kvargs" 00:03:10.554 Message: lib/telemetry: Defining dependency "telemetry" 00:03:10.554 Checking for function "getentropy" : NO 00:03:10.554 Message: lib/eal: Defining dependency "eal" 00:03:10.554 Message: lib/ring: Defining dependency "ring" 00:03:10.554 Message: lib/rcu: Defining dependency "rcu" 00:03:10.554 Message: lib/mempool: Defining dependency "mempool" 00:03:10.554 Message: lib/mbuf: Defining dependency "mbuf" 00:03:10.554 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:10.554 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:10.554 Compiler for C supports arguments -mpclmul: YES 00:03:10.554 Compiler for C supports arguments -maes: YES 00:03:10.554 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:10.554 Compiler for C supports arguments -mavx512bw: YES 00:03:10.554 Compiler for C supports arguments -mavx512dq: YES 00:03:10.554 Compiler for C supports arguments -mavx512vl: YES 00:03:10.554 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:10.554 Compiler for C supports arguments -mavx2: YES 00:03:10.554 Compiler for C supports arguments -mavx: YES 00:03:10.554 Message: lib/net: Defining dependency "net" 00:03:10.554 Message: lib/meter: Defining dependency "meter" 00:03:10.554 Message: lib/ethdev: Defining dependency "ethdev" 00:03:10.554 Message: lib/pci: Defining dependency "pci" 00:03:10.554 Message: lib/cmdline: Defining dependency "cmdline" 00:03:10.554 Message: lib/hash: Defining dependency "hash" 00:03:10.554 Message: lib/timer: Defining dependency "timer" 00:03:10.554 Message: lib/compressdev: Defining dependency "compressdev" 00:03:10.554 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:10.554 Message: lib/dmadev: Defining dependency "dmadev" 00:03:10.554 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:10.554 Message: lib/power: Defining dependency "power" 00:03:10.554 Message: lib/reorder: Defining dependency "reorder" 00:03:10.554 Message: lib/security: Defining dependency "security" 00:03:10.554 Has header "linux/userfaultfd.h" : YES 00:03:10.554 Has header "linux/vduse.h" : YES 00:03:10.554 Message: lib/vhost: Defining dependency "vhost" 00:03:10.554 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:10.554 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:10.554 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:10.554 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:10.554 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:10.554 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:10.554 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:10.554 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:10.554 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:10.554 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:10.554 Program doxygen found: YES (/usr/bin/doxygen) 00:03:10.554 Configuring doxy-api-html.conf using configuration 00:03:10.554 Configuring doxy-api-man.conf using configuration 00:03:10.554 Program mandb found: YES (/usr/bin/mandb) 00:03:10.554 Program sphinx-build found: NO 00:03:10.554 Configuring rte_build_config.h using configuration 00:03:10.554 Message: 00:03:10.554 ================= 00:03:10.554 Applications Enabled 00:03:10.554 ================= 00:03:10.554 00:03:10.554 apps: 00:03:10.554 00:03:10.554 00:03:10.554 Message: 00:03:10.554 ================= 00:03:10.554 Libraries Enabled 00:03:10.554 ================= 00:03:10.554 00:03:10.554 libs: 00:03:10.554 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:10.554 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:10.554 cryptodev, dmadev, power, reorder, security, vhost, 00:03:10.554 00:03:10.554 Message: 00:03:10.554 =============== 00:03:10.554 Drivers Enabled 00:03:10.554 =============== 00:03:10.554 00:03:10.554 common: 00:03:10.554 00:03:10.554 bus: 00:03:10.554 pci, vdev, 00:03:10.554 mempool: 00:03:10.554 ring, 00:03:10.554 dma: 00:03:10.554 00:03:10.554 net: 00:03:10.554 00:03:10.554 crypto: 00:03:10.554 00:03:10.554 compress: 00:03:10.554 00:03:10.554 vdpa: 00:03:10.554 00:03:10.554 00:03:10.554 Message: 00:03:10.554 ================= 00:03:10.554 Content Skipped 00:03:10.554 ================= 00:03:10.554 00:03:10.554 apps: 00:03:10.554 dumpcap: explicitly disabled via build config 00:03:10.554 graph: explicitly disabled via build config 00:03:10.554 pdump: explicitly disabled via build config 00:03:10.554 proc-info: explicitly disabled via build config 00:03:10.554 test-acl: explicitly disabled via build config 00:03:10.554 test-bbdev: explicitly disabled via build config 00:03:10.554 test-cmdline: explicitly disabled via build config 00:03:10.554 test-compress-perf: explicitly disabled via build config 00:03:10.554 test-crypto-perf: explicitly disabled via build config 00:03:10.554 test-dma-perf: explicitly disabled via build config 00:03:10.554 test-eventdev: explicitly disabled via build config 00:03:10.554 test-fib: explicitly disabled via build config 00:03:10.554 test-flow-perf: explicitly disabled via build config 00:03:10.554 test-gpudev: explicitly disabled via build config 00:03:10.554 test-mldev: explicitly disabled via build config 00:03:10.554 test-pipeline: explicitly disabled via build config 00:03:10.554 test-pmd: explicitly disabled via build config 00:03:10.554 test-regex: explicitly disabled via build config 00:03:10.554 test-sad: explicitly disabled via build config 00:03:10.554 test-security-perf: explicitly disabled via build config 00:03:10.554 00:03:10.554 libs: 00:03:10.554 argparse: explicitly disabled via build config 00:03:10.554 metrics: explicitly disabled via build config 00:03:10.554 acl: explicitly disabled via build config 00:03:10.554 bbdev: explicitly disabled via build config 00:03:10.554 
bitratestats: explicitly disabled via build config 00:03:10.554 bpf: explicitly disabled via build config 00:03:10.554 cfgfile: explicitly disabled via build config 00:03:10.554 distributor: explicitly disabled via build config 00:03:10.554 efd: explicitly disabled via build config 00:03:10.554 eventdev: explicitly disabled via build config 00:03:10.554 dispatcher: explicitly disabled via build config 00:03:10.554 gpudev: explicitly disabled via build config 00:03:10.554 gro: explicitly disabled via build config 00:03:10.554 gso: explicitly disabled via build config 00:03:10.554 ip_frag: explicitly disabled via build config 00:03:10.554 jobstats: explicitly disabled via build config 00:03:10.554 latencystats: explicitly disabled via build config 00:03:10.554 lpm: explicitly disabled via build config 00:03:10.554 member: explicitly disabled via build config 00:03:10.554 pcapng: explicitly disabled via build config 00:03:10.554 rawdev: explicitly disabled via build config 00:03:10.554 regexdev: explicitly disabled via build config 00:03:10.554 mldev: explicitly disabled via build config 00:03:10.554 rib: explicitly disabled via build config 00:03:10.554 sched: explicitly disabled via build config 00:03:10.554 stack: explicitly disabled via build config 00:03:10.554 ipsec: explicitly disabled via build config 00:03:10.554 pdcp: explicitly disabled via build config 00:03:10.555 fib: explicitly disabled via build config 00:03:10.555 port: explicitly disabled via build config 00:03:10.555 pdump: explicitly disabled via build config 00:03:10.555 table: explicitly disabled via build config 00:03:10.555 pipeline: explicitly disabled via build config 00:03:10.555 graph: explicitly disabled via build config 00:03:10.555 node: explicitly disabled via build config 00:03:10.555 00:03:10.555 drivers: 00:03:10.555 common/cpt: not in enabled drivers build config 00:03:10.555 common/dpaax: not in enabled drivers build config 00:03:10.555 common/iavf: not in enabled drivers build config 00:03:10.555 common/idpf: not in enabled drivers build config 00:03:10.555 common/ionic: not in enabled drivers build config 00:03:10.555 common/mvep: not in enabled drivers build config 00:03:10.555 common/octeontx: not in enabled drivers build config 00:03:10.555 bus/auxiliary: not in enabled drivers build config 00:03:10.555 bus/cdx: not in enabled drivers build config 00:03:10.555 bus/dpaa: not in enabled drivers build config 00:03:10.555 bus/fslmc: not in enabled drivers build config 00:03:10.555 bus/ifpga: not in enabled drivers build config 00:03:10.555 bus/platform: not in enabled drivers build config 00:03:10.555 bus/uacce: not in enabled drivers build config 00:03:10.555 bus/vmbus: not in enabled drivers build config 00:03:10.555 common/cnxk: not in enabled drivers build config 00:03:10.555 common/mlx5: not in enabled drivers build config 00:03:10.555 common/nfp: not in enabled drivers build config 00:03:10.555 common/nitrox: not in enabled drivers build config 00:03:10.555 common/qat: not in enabled drivers build config 00:03:10.555 common/sfc_efx: not in enabled drivers build config 00:03:10.555 mempool/bucket: not in enabled drivers build config 00:03:10.555 mempool/cnxk: not in enabled drivers build config 00:03:10.555 mempool/dpaa: not in enabled drivers build config 00:03:10.555 mempool/dpaa2: not in enabled drivers build config 00:03:10.555 mempool/octeontx: not in enabled drivers build config 00:03:10.555 mempool/stack: not in enabled drivers build config 00:03:10.555 dma/cnxk: not in enabled drivers build 
config 00:03:10.555 dma/dpaa: not in enabled drivers build config 00:03:10.555 dma/dpaa2: not in enabled drivers build config 00:03:10.555 dma/hisilicon: not in enabled drivers build config 00:03:10.555 dma/idxd: not in enabled drivers build config 00:03:10.555 dma/ioat: not in enabled drivers build config 00:03:10.555 dma/skeleton: not in enabled drivers build config 00:03:10.555 net/af_packet: not in enabled drivers build config 00:03:10.555 net/af_xdp: not in enabled drivers build config 00:03:10.555 net/ark: not in enabled drivers build config 00:03:10.555 net/atlantic: not in enabled drivers build config 00:03:10.555 net/avp: not in enabled drivers build config 00:03:10.555 net/axgbe: not in enabled drivers build config 00:03:10.555 net/bnx2x: not in enabled drivers build config 00:03:10.555 net/bnxt: not in enabled drivers build config 00:03:10.555 net/bonding: not in enabled drivers build config 00:03:10.555 net/cnxk: not in enabled drivers build config 00:03:10.555 net/cpfl: not in enabled drivers build config 00:03:10.555 net/cxgbe: not in enabled drivers build config 00:03:10.555 net/dpaa: not in enabled drivers build config 00:03:10.555 net/dpaa2: not in enabled drivers build config 00:03:10.555 net/e1000: not in enabled drivers build config 00:03:10.555 net/ena: not in enabled drivers build config 00:03:10.555 net/enetc: not in enabled drivers build config 00:03:10.555 net/enetfec: not in enabled drivers build config 00:03:10.555 net/enic: not in enabled drivers build config 00:03:10.555 net/failsafe: not in enabled drivers build config 00:03:10.555 net/fm10k: not in enabled drivers build config 00:03:10.555 net/gve: not in enabled drivers build config 00:03:10.555 net/hinic: not in enabled drivers build config 00:03:10.555 net/hns3: not in enabled drivers build config 00:03:10.555 net/i40e: not in enabled drivers build config 00:03:10.555 net/iavf: not in enabled drivers build config 00:03:10.555 net/ice: not in enabled drivers build config 00:03:10.555 net/idpf: not in enabled drivers build config 00:03:10.555 net/igc: not in enabled drivers build config 00:03:10.555 net/ionic: not in enabled drivers build config 00:03:10.555 net/ipn3ke: not in enabled drivers build config 00:03:10.555 net/ixgbe: not in enabled drivers build config 00:03:10.555 net/mana: not in enabled drivers build config 00:03:10.555 net/memif: not in enabled drivers build config 00:03:10.555 net/mlx4: not in enabled drivers build config 00:03:10.555 net/mlx5: not in enabled drivers build config 00:03:10.555 net/mvneta: not in enabled drivers build config 00:03:10.555 net/mvpp2: not in enabled drivers build config 00:03:10.555 net/netvsc: not in enabled drivers build config 00:03:10.555 net/nfb: not in enabled drivers build config 00:03:10.555 net/nfp: not in enabled drivers build config 00:03:10.555 net/ngbe: not in enabled drivers build config 00:03:10.555 net/null: not in enabled drivers build config 00:03:10.555 net/octeontx: not in enabled drivers build config 00:03:10.555 net/octeon_ep: not in enabled drivers build config 00:03:10.555 net/pcap: not in enabled drivers build config 00:03:10.555 net/pfe: not in enabled drivers build config 00:03:10.555 net/qede: not in enabled drivers build config 00:03:10.555 net/ring: not in enabled drivers build config 00:03:10.555 net/sfc: not in enabled drivers build config 00:03:10.555 net/softnic: not in enabled drivers build config 00:03:10.555 net/tap: not in enabled drivers build config 00:03:10.555 net/thunderx: not in enabled drivers build config 00:03:10.555 
net/txgbe: not in enabled drivers build config 00:03:10.555 net/vdev_netvsc: not in enabled drivers build config 00:03:10.555 net/vhost: not in enabled drivers build config 00:03:10.555 net/virtio: not in enabled drivers build config 00:03:10.555 net/vmxnet3: not in enabled drivers build config 00:03:10.555 raw/*: missing internal dependency, "rawdev" 00:03:10.555 crypto/armv8: not in enabled drivers build config 00:03:10.555 crypto/bcmfs: not in enabled drivers build config 00:03:10.555 crypto/caam_jr: not in enabled drivers build config 00:03:10.555 crypto/ccp: not in enabled drivers build config 00:03:10.555 crypto/cnxk: not in enabled drivers build config 00:03:10.555 crypto/dpaa_sec: not in enabled drivers build config 00:03:10.555 crypto/dpaa2_sec: not in enabled drivers build config 00:03:10.555 crypto/ipsec_mb: not in enabled drivers build config 00:03:10.555 crypto/mlx5: not in enabled drivers build config 00:03:10.555 crypto/mvsam: not in enabled drivers build config 00:03:10.555 crypto/nitrox: not in enabled drivers build config 00:03:10.555 crypto/null: not in enabled drivers build config 00:03:10.555 crypto/octeontx: not in enabled drivers build config 00:03:10.555 crypto/openssl: not in enabled drivers build config 00:03:10.555 crypto/scheduler: not in enabled drivers build config 00:03:10.555 crypto/uadk: not in enabled drivers build config 00:03:10.555 crypto/virtio: not in enabled drivers build config 00:03:10.555 compress/isal: not in enabled drivers build config 00:03:10.555 compress/mlx5: not in enabled drivers build config 00:03:10.555 compress/nitrox: not in enabled drivers build config 00:03:10.555 compress/octeontx: not in enabled drivers build config 00:03:10.555 compress/zlib: not in enabled drivers build config 00:03:10.555 regex/*: missing internal dependency, "regexdev" 00:03:10.555 ml/*: missing internal dependency, "mldev" 00:03:10.555 vdpa/ifc: not in enabled drivers build config 00:03:10.555 vdpa/mlx5: not in enabled drivers build config 00:03:10.555 vdpa/nfp: not in enabled drivers build config 00:03:10.555 vdpa/sfc: not in enabled drivers build config 00:03:10.555 event/*: missing internal dependency, "eventdev" 00:03:10.555 baseband/*: missing internal dependency, "bbdev" 00:03:10.555 gpu/*: missing internal dependency, "gpudev" 00:03:10.555 00:03:10.555 00:03:10.555 Build targets in project: 85 00:03:10.555 00:03:10.555 DPDK 24.03.0 00:03:10.555 00:03:10.555 User defined options 00:03:10.555 buildtype : debug 00:03:10.555 default_library : shared 00:03:10.555 libdir : lib 00:03:10.555 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:10.555 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:10.555 c_link_args : 00:03:10.555 cpu_instruction_set: native 00:03:10.555 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:10.555 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:10.555 enable_docs : false 00:03:10.555 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:10.555 enable_kmods : false 00:03:10.555 max_lcores : 128 00:03:10.555 tests : false 00:03:10.555 00:03:10.555 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:10.555 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:10.555 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:10.555 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:10.555 [3/268] Linking static target lib/librte_kvargs.a 00:03:10.555 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:10.555 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:10.555 [6/268] Linking static target lib/librte_log.a 00:03:10.555 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.555 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:10.555 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:10.555 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:10.555 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:10.555 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:10.555 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:10.555 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:10.555 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:10.555 [16/268] Linking static target lib/librte_telemetry.a 00:03:10.555 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.555 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:10.556 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:10.556 [20/268] Linking target lib/librte_log.so.24.1 00:03:10.813 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:10.813 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:11.070 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:11.070 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:11.327 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:11.327 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:11.327 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:11.327 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:11.327 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:11.327 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.585 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:11.585 [32/268] Linking target lib/librte_telemetry.so.24.1 00:03:11.585 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:11.585 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:11.842 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:11.842 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:11.842 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:12.098 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:12.098 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:12.098 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:12.355 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:12.355 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:12.355 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:12.355 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:12.355 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:12.625 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:12.625 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:12.625 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:12.625 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:12.883 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:12.883 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:12.883 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:13.142 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:13.142 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:13.401 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:13.401 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:13.401 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:13.401 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:13.659 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:13.659 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:13.659 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:13.659 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:13.659 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:13.915 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:13.916 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:14.172 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:14.429 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:14.429 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:14.429 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:14.429 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:14.686 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:14.686 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:14.686 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:14.686 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:14.686 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:14.686 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:14.943 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:15.201 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:15.201 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:15.201 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:15.458 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:15.458 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:15.458 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:15.458 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:15.726 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:15.726 [86/268] Linking static target lib/librte_ring.a 00:03:15.726 [87/268] Linking static target lib/librte_eal.a 00:03:15.726 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:15.984 [89/268] Linking static target lib/librte_rcu.a 00:03:15.984 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:15.984 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:15.984 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:16.242 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.242 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:16.242 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:16.242 [96/268] Linking static target lib/librte_mempool.a 00:03:16.242 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:16.242 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.499 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:16.499 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:16.499 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:16.499 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:16.499 [103/268] Linking static target lib/librte_mbuf.a 00:03:16.756 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:16.756 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:17.015 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:17.015 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:17.015 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:17.015 [109/268] Linking static target lib/librte_net.a 00:03:17.015 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:17.015 [111/268] Linking static target lib/librte_meter.a 00:03:17.273 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:17.531 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.531 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:17.531 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:17.531 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.531 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.531 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.789 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:18.046 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
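The "User defined options" summary printed by meson near the top of this DPDK configure step corresponds, roughly, to a setup invocation like the sketch below. This is illustrative only: SPDK's dpdkbuild wrapper issues the real command, and the disable_apps/disable_libs lists are abbreviated here to keep the example short.

    # Rough equivalent of the configuration summarised above (option lists abbreviated).
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp /home/vagrant/spdk_repo/spdk/dpdk \
        --buildtype=debug \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Ddefault_library=shared \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow ...' \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,argparse,bbdev,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Dmax_lcores=128 -Dtests=false -Denable_docs=false -Denable_kmods=false
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
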
00:03:18.304 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:18.563 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:18.563 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:18.563 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:18.822 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:18.822 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:18.822 [127/268] Linking static target lib/librte_pci.a 00:03:18.822 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:18.822 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:18.822 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:18.822 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:19.080 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:19.080 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:19.080 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:19.080 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.080 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:19.080 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:19.338 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:19.338 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:19.338 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:19.338 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:19.338 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:19.338 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:19.338 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:19.338 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:19.338 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:19.927 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:19.927 [148/268] Linking static target lib/librte_ethdev.a 00:03:19.927 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:19.927 [150/268] Linking static target lib/librte_cmdline.a 00:03:19.927 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:19.927 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:19.927 [153/268] Linking static target lib/librte_timer.a 00:03:19.927 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:20.186 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:20.186 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:20.186 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:20.186 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:20.186 [159/268] Linking static target lib/librte_hash.a 00:03:20.753 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.753 [161/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:20.753 [162/268] Linking static target lib/librte_compressdev.a 00:03:20.753 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:20.753 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:21.011 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:21.011 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:21.011 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:21.269 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:21.269 [169/268] Linking static target lib/librte_dmadev.a 00:03:21.527 [170/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.527 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:21.527 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.527 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:21.527 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:21.527 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:21.527 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.527 [177/268] Linking static target lib/librte_cryptodev.a 00:03:21.785 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:22.044 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:22.045 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:22.045 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:22.045 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.045 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:22.045 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:22.304 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:22.304 [186/268] Linking static target lib/librte_power.a 00:03:22.304 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:22.304 [188/268] Linking static target lib/librte_reorder.a 00:03:22.617 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:22.877 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:22.877 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:22.877 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:22.877 [193/268] Linking static target lib/librte_security.a 00:03:22.877 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.135 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:23.393 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.393 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.393 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:23.393 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:23.668 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:23.668 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.668 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:23.926 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:23.926 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:23.926 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:23.926 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:24.184 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:24.184 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:24.184 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:24.184 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:24.184 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:24.442 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:24.442 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:24.442 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:24.442 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:24.442 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:24.442 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:24.442 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:24.442 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:24.442 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:24.442 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:24.442 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:24.701 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.701 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:24.701 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:24.701 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:24.701 [227/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:25.268 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.834 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:25.834 [230/268] Linking static target lib/librte_vhost.a 00:03:26.399 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.399 [232/268] Linking target lib/librte_eal.so.24.1 00:03:26.655 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:26.655 [234/268] Linking target lib/librte_timer.so.24.1 00:03:26.655 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:26.655 [236/268] Linking target lib/librte_ring.so.24.1 00:03:26.655 [237/268] Linking target lib/librte_pci.so.24.1 00:03:26.655 [238/268] Linking target lib/librte_meter.so.24.1 00:03:26.655 [239/268] Linking target lib/librte_dmadev.so.24.1 
00:03:26.913 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:26.913 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:26.913 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:26.913 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:26.913 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:26.913 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:26.913 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:26.913 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:26.913 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:26.913 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:27.171 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:27.171 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:27.171 [252/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.171 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.171 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:27.171 [255/268] Linking target lib/librte_net.so.24.1 00:03:27.171 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:27.171 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:27.171 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:27.428 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:27.429 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:27.429 [261/268] Linking target lib/librte_security.so.24.1 00:03:27.429 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:27.429 [263/268] Linking target lib/librte_hash.so.24.1 00:03:27.429 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:27.686 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:27.686 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:27.686 [267/268] Linking target lib/librte_power.so.24.1 00:03:27.686 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:27.686 INFO: autodetecting backend as ninja 00:03:27.686 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:29.058 CC lib/log/log.o 00:03:29.058 CC lib/log/log_flags.o 00:03:29.058 CC lib/log/log_deprecated.o 00:03:29.058 CC lib/ut/ut.o 00:03:29.058 CC lib/ut_mock/mock.o 00:03:29.315 LIB libspdk_log.a 00:03:29.315 LIB libspdk_ut.a 00:03:29.315 LIB libspdk_ut_mock.a 00:03:29.315 SO libspdk_ut.so.2.0 00:03:29.315 SO libspdk_log.so.7.0 00:03:29.315 SO libspdk_ut_mock.so.6.0 00:03:29.316 SYMLINK libspdk_ut.so 00:03:29.316 SYMLINK libspdk_ut_mock.so 00:03:29.316 SYMLINK libspdk_log.so 00:03:29.573 CC lib/util/base64.o 00:03:29.573 CC lib/util/bit_array.o 00:03:29.573 CC lib/util/cpuset.o 00:03:29.573 CC lib/util/crc16.o 00:03:29.573 CC lib/util/crc32.o 00:03:29.573 CXX lib/trace_parser/trace.o 00:03:29.573 CC lib/dma/dma.o 00:03:29.573 CC lib/util/crc32c.o 00:03:29.573 CC lib/ioat/ioat.o 00:03:29.573 CC lib/vfio_user/host/vfio_user_pci.o 00:03:29.830 CC lib/util/crc32_ieee.o 00:03:29.830 CC lib/vfio_user/host/vfio_user.o 00:03:29.830 CC 
lib/util/crc64.o 00:03:29.830 CC lib/util/dif.o 00:03:29.830 CC lib/util/fd.o 00:03:29.830 LIB libspdk_dma.a 00:03:29.830 SO libspdk_dma.so.4.0 00:03:29.830 CC lib/util/fd_group.o 00:03:29.830 LIB libspdk_ioat.a 00:03:29.830 SYMLINK libspdk_dma.so 00:03:29.830 CC lib/util/file.o 00:03:29.830 CC lib/util/hexlify.o 00:03:29.830 SO libspdk_ioat.so.7.0 00:03:29.830 CC lib/util/iov.o 00:03:29.830 CC lib/util/math.o 00:03:29.830 SYMLINK libspdk_ioat.so 00:03:29.830 CC lib/util/net.o 00:03:30.087 CC lib/util/pipe.o 00:03:30.087 LIB libspdk_vfio_user.a 00:03:30.087 CC lib/util/strerror_tls.o 00:03:30.087 SO libspdk_vfio_user.so.5.0 00:03:30.087 CC lib/util/string.o 00:03:30.087 CC lib/util/uuid.o 00:03:30.087 CC lib/util/xor.o 00:03:30.087 CC lib/util/zipf.o 00:03:30.087 SYMLINK libspdk_vfio_user.so 00:03:30.345 LIB libspdk_util.a 00:03:30.345 SO libspdk_util.so.9.1 00:03:30.602 LIB libspdk_trace_parser.a 00:03:30.602 SYMLINK libspdk_util.so 00:03:30.602 SO libspdk_trace_parser.so.5.0 00:03:30.860 SYMLINK libspdk_trace_parser.so 00:03:30.860 CC lib/rdma_utils/rdma_utils.o 00:03:30.860 CC lib/json/json_parse.o 00:03:30.860 CC lib/json/json_util.o 00:03:30.860 CC lib/json/json_write.o 00:03:30.860 CC lib/vmd/vmd.o 00:03:30.860 CC lib/vmd/led.o 00:03:30.860 CC lib/idxd/idxd.o 00:03:30.860 CC lib/rdma_provider/common.o 00:03:30.860 CC lib/conf/conf.o 00:03:30.860 CC lib/env_dpdk/env.o 00:03:31.117 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:31.117 CC lib/env_dpdk/memory.o 00:03:31.117 LIB libspdk_conf.a 00:03:31.117 CC lib/env_dpdk/pci.o 00:03:31.117 CC lib/env_dpdk/init.o 00:03:31.117 SO libspdk_conf.so.6.0 00:03:31.117 LIB libspdk_json.a 00:03:31.117 SYMLINK libspdk_conf.so 00:03:31.117 CC lib/env_dpdk/threads.o 00:03:31.117 LIB libspdk_rdma_utils.a 00:03:31.117 LIB libspdk_rdma_provider.a 00:03:31.117 SO libspdk_json.so.6.0 00:03:31.117 SO libspdk_rdma_utils.so.1.0 00:03:31.375 SO libspdk_rdma_provider.so.6.0 00:03:31.375 SYMLINK libspdk_json.so 00:03:31.375 CC lib/env_dpdk/pci_ioat.o 00:03:31.375 SYMLINK libspdk_rdma_utils.so 00:03:31.375 SYMLINK libspdk_rdma_provider.so 00:03:31.375 CC lib/env_dpdk/pci_virtio.o 00:03:31.375 CC lib/env_dpdk/pci_vmd.o 00:03:31.375 LIB libspdk_vmd.a 00:03:31.375 CC lib/env_dpdk/pci_idxd.o 00:03:31.375 CC lib/env_dpdk/pci_event.o 00:03:31.375 CC lib/env_dpdk/sigbus_handler.o 00:03:31.375 CC lib/idxd/idxd_user.o 00:03:31.375 CC lib/jsonrpc/jsonrpc_server.o 00:03:31.375 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:31.632 CC lib/jsonrpc/jsonrpc_client.o 00:03:31.632 SO libspdk_vmd.so.6.0 00:03:31.632 CC lib/idxd/idxd_kernel.o 00:03:31.632 SYMLINK libspdk_vmd.so 00:03:31.632 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:31.632 CC lib/env_dpdk/pci_dpdk.o 00:03:31.632 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:31.632 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:31.890 LIB libspdk_idxd.a 00:03:31.890 LIB libspdk_jsonrpc.a 00:03:31.890 SO libspdk_idxd.so.12.0 00:03:31.890 SO libspdk_jsonrpc.so.6.0 00:03:31.890 SYMLINK libspdk_idxd.so 00:03:31.890 SYMLINK libspdk_jsonrpc.so 00:03:32.147 CC lib/rpc/rpc.o 00:03:32.147 LIB libspdk_env_dpdk.a 00:03:32.404 SO libspdk_env_dpdk.so.15.0 00:03:32.404 LIB libspdk_rpc.a 00:03:32.404 SO libspdk_rpc.so.6.0 00:03:32.661 SYMLINK libspdk_rpc.so 00:03:32.661 SYMLINK libspdk_env_dpdk.so 00:03:32.661 CC lib/trace/trace.o 00:03:32.661 CC lib/trace/trace_flags.o 00:03:32.917 CC lib/trace/trace_rpc.o 00:03:32.917 CC lib/notify/notify_rpc.o 00:03:32.917 CC lib/notify/notify.o 00:03:32.917 CC lib/keyring/keyring.o 00:03:32.917 CC lib/keyring/keyring_rpc.o 
00:03:32.917 LIB libspdk_notify.a 00:03:32.917 SO libspdk_notify.so.6.0 00:03:33.174 SYMLINK libspdk_notify.so 00:03:33.174 LIB libspdk_keyring.a 00:03:33.174 LIB libspdk_trace.a 00:03:33.174 SO libspdk_keyring.so.1.0 00:03:33.174 SO libspdk_trace.so.10.0 00:03:33.174 SYMLINK libspdk_keyring.so 00:03:33.174 SYMLINK libspdk_trace.so 00:03:33.431 CC lib/sock/sock_rpc.o 00:03:33.431 CC lib/sock/sock.o 00:03:33.431 CC lib/thread/thread.o 00:03:33.431 CC lib/thread/iobuf.o 00:03:33.994 LIB libspdk_sock.a 00:03:33.994 SO libspdk_sock.so.10.0 00:03:34.252 SYMLINK libspdk_sock.so 00:03:34.509 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:34.509 CC lib/nvme/nvme_ctrlr.o 00:03:34.509 CC lib/nvme/nvme_fabric.o 00:03:34.509 CC lib/nvme/nvme_ns_cmd.o 00:03:34.509 CC lib/nvme/nvme_ns.o 00:03:34.509 CC lib/nvme/nvme_pcie_common.o 00:03:34.509 CC lib/nvme/nvme_pcie.o 00:03:34.509 CC lib/nvme/nvme.o 00:03:34.509 CC lib/nvme/nvme_qpair.o 00:03:35.075 LIB libspdk_thread.a 00:03:35.075 SO libspdk_thread.so.10.1 00:03:35.332 CC lib/nvme/nvme_quirks.o 00:03:35.332 CC lib/nvme/nvme_transport.o 00:03:35.332 SYMLINK libspdk_thread.so 00:03:35.332 CC lib/nvme/nvme_discovery.o 00:03:35.332 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:35.332 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:35.332 CC lib/nvme/nvme_tcp.o 00:03:35.590 CC lib/nvme/nvme_opal.o 00:03:35.590 CC lib/nvme/nvme_io_msg.o 00:03:35.590 CC lib/accel/accel.o 00:03:35.847 CC lib/accel/accel_rpc.o 00:03:35.847 CC lib/accel/accel_sw.o 00:03:36.104 CC lib/nvme/nvme_poll_group.o 00:03:36.104 CC lib/nvme/nvme_zns.o 00:03:36.105 CC lib/blob/blobstore.o 00:03:36.105 CC lib/nvme/nvme_stubs.o 00:03:36.362 CC lib/nvme/nvme_auth.o 00:03:36.362 CC lib/init/json_config.o 00:03:36.362 CC lib/virtio/virtio.o 00:03:36.620 CC lib/init/subsystem.o 00:03:36.620 LIB libspdk_accel.a 00:03:36.620 CC lib/virtio/virtio_vhost_user.o 00:03:36.620 SO libspdk_accel.so.15.1 00:03:36.620 SYMLINK libspdk_accel.so 00:03:36.620 CC lib/virtio/virtio_vfio_user.o 00:03:36.879 CC lib/init/subsystem_rpc.o 00:03:36.879 CC lib/virtio/virtio_pci.o 00:03:36.879 CC lib/nvme/nvme_cuse.o 00:03:36.879 CC lib/init/rpc.o 00:03:36.879 CC lib/nvme/nvme_rdma.o 00:03:36.879 CC lib/blob/request.o 00:03:36.879 CC lib/blob/zeroes.o 00:03:36.879 CC lib/bdev/bdev.o 00:03:37.137 CC lib/bdev/bdev_rpc.o 00:03:37.137 LIB libspdk_init.a 00:03:37.137 SO libspdk_init.so.5.0 00:03:37.137 LIB libspdk_virtio.a 00:03:37.137 CC lib/blob/blob_bs_dev.o 00:03:37.137 SO libspdk_virtio.so.7.0 00:03:37.137 SYMLINK libspdk_init.so 00:03:37.137 CC lib/bdev/bdev_zone.o 00:03:37.137 CC lib/bdev/part.o 00:03:37.137 SYMLINK libspdk_virtio.so 00:03:37.137 CC lib/bdev/scsi_nvme.o 00:03:37.396 CC lib/event/app.o 00:03:37.396 CC lib/event/reactor.o 00:03:37.396 CC lib/event/log_rpc.o 00:03:37.396 CC lib/event/app_rpc.o 00:03:37.396 CC lib/event/scheduler_static.o 00:03:37.973 LIB libspdk_event.a 00:03:37.973 SO libspdk_event.so.14.0 00:03:37.973 SYMLINK libspdk_event.so 00:03:38.231 LIB libspdk_nvme.a 00:03:38.489 SO libspdk_nvme.so.13.1 00:03:38.747 SYMLINK libspdk_nvme.so 00:03:39.680 LIB libspdk_blob.a 00:03:39.680 SO libspdk_blob.so.11.0 00:03:39.680 SYMLINK libspdk_blob.so 00:03:39.680 LIB libspdk_bdev.a 00:03:39.680 SO libspdk_bdev.so.15.1 00:03:39.938 SYMLINK libspdk_bdev.so 00:03:39.938 CC lib/blobfs/tree.o 00:03:39.938 CC lib/blobfs/blobfs.o 00:03:39.938 CC lib/lvol/lvol.o 00:03:39.938 CC lib/nbd/nbd.o 00:03:39.938 CC lib/nbd/nbd_rpc.o 00:03:39.938 CC lib/scsi/dev.o 00:03:39.938 CC lib/nvmf/ctrlr.o 00:03:39.938 CC lib/scsi/lun.o 00:03:39.938 CC 
lib/ublk/ublk.o 00:03:39.938 CC lib/ftl/ftl_core.o 00:03:39.938 CC lib/ftl/ftl_init.o 00:03:40.196 CC lib/ftl/ftl_layout.o 00:03:40.196 CC lib/ftl/ftl_debug.o 00:03:40.196 CC lib/ftl/ftl_io.o 00:03:40.458 CC lib/scsi/port.o 00:03:40.458 LIB libspdk_nbd.a 00:03:40.458 CC lib/nvmf/ctrlr_discovery.o 00:03:40.458 SO libspdk_nbd.so.7.0 00:03:40.458 CC lib/ublk/ublk_rpc.o 00:03:40.458 CC lib/scsi/scsi.o 00:03:40.458 SYMLINK libspdk_nbd.so 00:03:40.458 CC lib/nvmf/ctrlr_bdev.o 00:03:40.458 CC lib/ftl/ftl_sb.o 00:03:40.717 CC lib/scsi/scsi_bdev.o 00:03:40.717 LIB libspdk_blobfs.a 00:03:40.717 CC lib/scsi/scsi_pr.o 00:03:40.717 CC lib/ftl/ftl_l2p.o 00:03:40.717 LIB libspdk_ublk.a 00:03:40.717 SO libspdk_blobfs.so.10.0 00:03:40.717 SO libspdk_ublk.so.3.0 00:03:40.717 SYMLINK libspdk_blobfs.so 00:03:40.717 CC lib/scsi/scsi_rpc.o 00:03:40.717 SYMLINK libspdk_ublk.so 00:03:40.717 CC lib/nvmf/subsystem.o 00:03:40.717 CC lib/ftl/ftl_l2p_flat.o 00:03:40.717 LIB libspdk_lvol.a 00:03:40.975 SO libspdk_lvol.so.10.0 00:03:40.975 CC lib/scsi/task.o 00:03:40.975 SYMLINK libspdk_lvol.so 00:03:40.975 CC lib/nvmf/nvmf.o 00:03:40.975 CC lib/ftl/ftl_nv_cache.o 00:03:40.975 CC lib/nvmf/nvmf_rpc.o 00:03:40.975 CC lib/nvmf/transport.o 00:03:40.975 CC lib/nvmf/tcp.o 00:03:40.975 CC lib/nvmf/stubs.o 00:03:41.234 LIB libspdk_scsi.a 00:03:41.234 CC lib/nvmf/mdns_server.o 00:03:41.234 SO libspdk_scsi.so.9.0 00:03:41.234 SYMLINK libspdk_scsi.so 00:03:41.493 CC lib/iscsi/conn.o 00:03:41.493 CC lib/nvmf/rdma.o 00:03:41.750 CC lib/nvmf/auth.o 00:03:41.750 CC lib/iscsi/init_grp.o 00:03:41.750 CC lib/ftl/ftl_band.o 00:03:41.750 CC lib/iscsi/iscsi.o 00:03:41.750 CC lib/vhost/vhost.o 00:03:42.008 CC lib/ftl/ftl_band_ops.o 00:03:42.008 CC lib/iscsi/md5.o 00:03:42.008 CC lib/iscsi/param.o 00:03:42.008 CC lib/ftl/ftl_writer.o 00:03:42.266 CC lib/ftl/ftl_rq.o 00:03:42.266 CC lib/ftl/ftl_reloc.o 00:03:42.266 CC lib/ftl/ftl_l2p_cache.o 00:03:42.266 CC lib/iscsi/portal_grp.o 00:03:42.266 CC lib/iscsi/tgt_node.o 00:03:42.525 CC lib/iscsi/iscsi_subsystem.o 00:03:42.525 CC lib/iscsi/iscsi_rpc.o 00:03:42.525 CC lib/vhost/vhost_rpc.o 00:03:42.525 CC lib/vhost/vhost_scsi.o 00:03:42.525 CC lib/vhost/vhost_blk.o 00:03:42.525 CC lib/ftl/ftl_p2l.o 00:03:42.784 CC lib/iscsi/task.o 00:03:42.784 CC lib/ftl/mngt/ftl_mngt.o 00:03:42.784 CC lib/vhost/rte_vhost_user.o 00:03:43.042 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:43.042 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:43.042 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:43.042 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:43.339 LIB libspdk_iscsi.a 00:03:43.339 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:43.339 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:43.339 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:43.339 SO libspdk_iscsi.so.8.0 00:03:43.339 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:43.339 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:43.339 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:43.339 SYMLINK libspdk_iscsi.so 00:03:43.339 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:43.339 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:43.339 CC lib/ftl/utils/ftl_conf.o 00:03:43.597 LIB libspdk_nvmf.a 00:03:43.597 CC lib/ftl/utils/ftl_md.o 00:03:43.597 CC lib/ftl/utils/ftl_mempool.o 00:03:43.597 CC lib/ftl/utils/ftl_bitmap.o 00:03:43.597 CC lib/ftl/utils/ftl_property.o 00:03:43.597 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:43.597 SO libspdk_nvmf.so.19.0 00:03:43.855 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:43.855 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:43.855 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:43.855 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:03:43.855 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:43.855 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:43.855 SYMLINK libspdk_nvmf.so 00:03:43.855 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:43.855 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:43.855 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:44.113 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:44.113 LIB libspdk_vhost.a 00:03:44.113 CC lib/ftl/base/ftl_base_dev.o 00:03:44.113 CC lib/ftl/base/ftl_base_bdev.o 00:03:44.113 CC lib/ftl/ftl_trace.o 00:03:44.113 SO libspdk_vhost.so.8.0 00:03:44.113 SYMLINK libspdk_vhost.so 00:03:44.371 LIB libspdk_ftl.a 00:03:44.629 SO libspdk_ftl.so.9.0 00:03:44.887 SYMLINK libspdk_ftl.so 00:03:45.453 CC module/env_dpdk/env_dpdk_rpc.o 00:03:45.453 CC module/accel/ioat/accel_ioat.o 00:03:45.453 CC module/accel/error/accel_error.o 00:03:45.453 CC module/keyring/file/keyring.o 00:03:45.453 CC module/accel/dsa/accel_dsa.o 00:03:45.453 CC module/blob/bdev/blob_bdev.o 00:03:45.453 CC module/keyring/linux/keyring.o 00:03:45.453 CC module/accel/iaa/accel_iaa.o 00:03:45.453 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:45.453 CC module/sock/posix/posix.o 00:03:45.453 LIB libspdk_env_dpdk_rpc.a 00:03:45.453 SO libspdk_env_dpdk_rpc.so.6.0 00:03:45.453 CC module/keyring/file/keyring_rpc.o 00:03:45.453 CC module/keyring/linux/keyring_rpc.o 00:03:45.453 SYMLINK libspdk_env_dpdk_rpc.so 00:03:45.453 CC module/accel/error/accel_error_rpc.o 00:03:45.453 CC module/accel/ioat/accel_ioat_rpc.o 00:03:45.712 CC module/accel/dsa/accel_dsa_rpc.o 00:03:45.712 CC module/accel/iaa/accel_iaa_rpc.o 00:03:45.712 LIB libspdk_scheduler_dynamic.a 00:03:45.712 SO libspdk_scheduler_dynamic.so.4.0 00:03:45.712 LIB libspdk_keyring_linux.a 00:03:45.712 LIB libspdk_keyring_file.a 00:03:45.712 LIB libspdk_blob_bdev.a 00:03:45.712 SO libspdk_keyring_linux.so.1.0 00:03:45.712 SO libspdk_keyring_file.so.1.0 00:03:45.712 LIB libspdk_accel_error.a 00:03:45.712 LIB libspdk_accel_ioat.a 00:03:45.712 SYMLINK libspdk_scheduler_dynamic.so 00:03:45.712 SO libspdk_blob_bdev.so.11.0 00:03:45.712 LIB libspdk_accel_iaa.a 00:03:45.712 SO libspdk_accel_error.so.2.0 00:03:45.712 LIB libspdk_accel_dsa.a 00:03:45.712 SO libspdk_accel_ioat.so.6.0 00:03:45.712 SO libspdk_accel_iaa.so.3.0 00:03:45.712 SYMLINK libspdk_keyring_file.so 00:03:45.712 SYMLINK libspdk_keyring_linux.so 00:03:45.712 SO libspdk_accel_dsa.so.5.0 00:03:45.712 SYMLINK libspdk_blob_bdev.so 00:03:45.971 SYMLINK libspdk_accel_error.so 00:03:45.971 SYMLINK libspdk_accel_ioat.so 00:03:45.971 SYMLINK libspdk_accel_iaa.so 00:03:45.971 SYMLINK libspdk_accel_dsa.so 00:03:45.971 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:45.971 CC module/scheduler/gscheduler/gscheduler.o 00:03:45.971 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.229 LIB libspdk_scheduler_gscheduler.a 00:03:46.229 CC module/bdev/error/vbdev_error.o 00:03:46.229 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:46.229 CC module/bdev/delay/vbdev_delay.o 00:03:46.229 CC module/bdev/null/bdev_null.o 00:03:46.229 CC module/bdev/lvol/vbdev_lvol.o 00:03:46.229 CC module/blobfs/bdev/blobfs_bdev.o 00:03:46.229 CC module/bdev/gpt/gpt.o 00:03:46.229 CC module/bdev/malloc/bdev_malloc.o 00:03:46.229 SO libspdk_scheduler_gscheduler.so.4.0 00:03:46.229 LIB libspdk_sock_posix.a 00:03:46.229 SO libspdk_sock_posix.so.6.0 00:03:46.229 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.229 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:46.229 SYMLINK libspdk_scheduler_gscheduler.so 00:03:46.229 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:46.229 SYMLINK 
libspdk_sock_posix.so 00:03:46.229 CC module/bdev/error/vbdev_error_rpc.o 00:03:46.229 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:46.229 CC module/bdev/gpt/vbdev_gpt.o 00:03:46.488 CC module/bdev/null/bdev_null_rpc.o 00:03:46.488 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:46.488 LIB libspdk_bdev_error.a 00:03:46.488 SO libspdk_bdev_error.so.6.0 00:03:46.488 LIB libspdk_blobfs_bdev.a 00:03:46.488 LIB libspdk_bdev_malloc.a 00:03:46.488 SO libspdk_blobfs_bdev.so.6.0 00:03:46.488 SO libspdk_bdev_malloc.so.6.0 00:03:46.488 SYMLINK libspdk_bdev_error.so 00:03:46.488 CC module/bdev/nvme/bdev_nvme.o 00:03:46.488 LIB libspdk_bdev_null.a 00:03:46.488 SYMLINK libspdk_blobfs_bdev.so 00:03:46.488 LIB libspdk_bdev_delay.a 00:03:46.488 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:46.488 SYMLINK libspdk_bdev_malloc.so 00:03:46.488 CC module/bdev/nvme/nvme_rpc.o 00:03:46.746 LIB libspdk_bdev_gpt.a 00:03:46.746 SO libspdk_bdev_delay.so.6.0 00:03:46.746 LIB libspdk_bdev_lvol.a 00:03:46.746 SO libspdk_bdev_null.so.6.0 00:03:46.746 SO libspdk_bdev_gpt.so.6.0 00:03:46.746 SO libspdk_bdev_lvol.so.6.0 00:03:46.746 CC module/bdev/passthru/vbdev_passthru.o 00:03:46.746 SYMLINK libspdk_bdev_delay.so 00:03:46.746 SYMLINK libspdk_bdev_null.so 00:03:46.746 CC module/bdev/nvme/bdev_mdns_client.o 00:03:46.746 CC module/bdev/raid/bdev_raid.o 00:03:46.746 SYMLINK libspdk_bdev_gpt.so 00:03:46.746 SYMLINK libspdk_bdev_lvol.so 00:03:46.746 CC module/bdev/nvme/vbdev_opal.o 00:03:46.746 CC module/bdev/split/vbdev_split.o 00:03:46.746 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:47.004 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:47.004 CC module/bdev/aio/bdev_aio.o 00:03:47.004 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:47.004 CC module/bdev/split/vbdev_split_rpc.o 00:03:47.004 CC module/bdev/aio/bdev_aio_rpc.o 00:03:47.004 LIB libspdk_bdev_passthru.a 00:03:47.004 SO libspdk_bdev_passthru.so.6.0 00:03:47.263 SYMLINK libspdk_bdev_passthru.so 00:03:47.263 LIB libspdk_bdev_split.a 00:03:47.263 CC module/bdev/raid/bdev_raid_rpc.o 00:03:47.263 CC module/bdev/ftl/bdev_ftl.o 00:03:47.263 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:47.263 SO libspdk_bdev_split.so.6.0 00:03:47.263 LIB libspdk_bdev_zone_block.a 00:03:47.263 LIB libspdk_bdev_aio.a 00:03:47.263 SO libspdk_bdev_zone_block.so.6.0 00:03:47.263 SYMLINK libspdk_bdev_split.so 00:03:47.263 SO libspdk_bdev_aio.so.6.0 00:03:47.263 CC module/bdev/raid/bdev_raid_sb.o 00:03:47.263 CC module/bdev/iscsi/bdev_iscsi.o 00:03:47.263 SYMLINK libspdk_bdev_zone_block.so 00:03:47.263 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:47.263 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:47.522 SYMLINK libspdk_bdev_aio.so 00:03:47.522 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:47.522 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:47.522 CC module/bdev/raid/raid0.o 00:03:47.522 LIB libspdk_bdev_ftl.a 00:03:47.522 SO libspdk_bdev_ftl.so.6.0 00:03:47.522 CC module/bdev/raid/raid1.o 00:03:47.522 SYMLINK libspdk_bdev_ftl.so 00:03:47.522 CC module/bdev/raid/concat.o 00:03:47.779 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:47.779 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:47.779 LIB libspdk_bdev_iscsi.a 00:03:47.779 SO libspdk_bdev_iscsi.so.6.0 00:03:47.779 SYMLINK libspdk_bdev_iscsi.so 00:03:47.779 LIB libspdk_bdev_virtio.a 00:03:47.779 LIB libspdk_bdev_raid.a 00:03:48.037 SO libspdk_bdev_virtio.so.6.0 00:03:48.037 SO libspdk_bdev_raid.so.6.0 00:03:48.037 SYMLINK libspdk_bdev_virtio.so 00:03:48.037 SYMLINK libspdk_bdev_raid.so 00:03:48.971 LIB libspdk_bdev_nvme.a 00:03:48.971 SO 
libspdk_bdev_nvme.so.7.0 00:03:48.971 SYMLINK libspdk_bdev_nvme.so 00:03:49.539 CC module/event/subsystems/scheduler/scheduler.o 00:03:49.539 CC module/event/subsystems/vmd/vmd.o 00:03:49.539 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:49.539 CC module/event/subsystems/keyring/keyring.o 00:03:49.539 CC module/event/subsystems/iobuf/iobuf.o 00:03:49.539 CC module/event/subsystems/sock/sock.o 00:03:49.539 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:49.539 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:49.539 LIB libspdk_event_vhost_blk.a 00:03:49.798 LIB libspdk_event_scheduler.a 00:03:49.798 LIB libspdk_event_vmd.a 00:03:49.798 SO libspdk_event_vhost_blk.so.3.0 00:03:49.798 LIB libspdk_event_keyring.a 00:03:49.798 LIB libspdk_event_sock.a 00:03:49.798 LIB libspdk_event_iobuf.a 00:03:49.798 SO libspdk_event_scheduler.so.4.0 00:03:49.798 SO libspdk_event_keyring.so.1.0 00:03:49.798 SO libspdk_event_sock.so.5.0 00:03:49.798 SO libspdk_event_vmd.so.6.0 00:03:49.798 SO libspdk_event_iobuf.so.3.0 00:03:49.798 SYMLINK libspdk_event_vhost_blk.so 00:03:49.798 SYMLINK libspdk_event_scheduler.so 00:03:49.798 SYMLINK libspdk_event_keyring.so 00:03:49.798 SYMLINK libspdk_event_sock.so 00:03:49.798 SYMLINK libspdk_event_vmd.so 00:03:49.798 SYMLINK libspdk_event_iobuf.so 00:03:50.056 CC module/event/subsystems/accel/accel.o 00:03:50.314 LIB libspdk_event_accel.a 00:03:50.314 SO libspdk_event_accel.so.6.0 00:03:50.314 SYMLINK libspdk_event_accel.so 00:03:50.573 CC module/event/subsystems/bdev/bdev.o 00:03:50.832 LIB libspdk_event_bdev.a 00:03:50.832 SO libspdk_event_bdev.so.6.0 00:03:51.090 SYMLINK libspdk_event_bdev.so 00:03:51.090 CC module/event/subsystems/scsi/scsi.o 00:03:51.090 CC module/event/subsystems/nbd/nbd.o 00:03:51.090 CC module/event/subsystems/ublk/ublk.o 00:03:51.090 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:51.090 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:51.348 LIB libspdk_event_nbd.a 00:03:51.348 LIB libspdk_event_ublk.a 00:03:51.348 SO libspdk_event_nbd.so.6.0 00:03:51.348 LIB libspdk_event_scsi.a 00:03:51.348 SO libspdk_event_ublk.so.3.0 00:03:51.348 SO libspdk_event_scsi.so.6.0 00:03:51.348 SYMLINK libspdk_event_nbd.so 00:03:51.348 SYMLINK libspdk_event_ublk.so 00:03:51.606 LIB libspdk_event_nvmf.a 00:03:51.606 SYMLINK libspdk_event_scsi.so 00:03:51.606 SO libspdk_event_nvmf.so.6.0 00:03:51.606 SYMLINK libspdk_event_nvmf.so 00:03:51.864 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:51.864 CC module/event/subsystems/iscsi/iscsi.o 00:03:51.864 LIB libspdk_event_vhost_scsi.a 00:03:51.864 LIB libspdk_event_iscsi.a 00:03:51.864 SO libspdk_event_vhost_scsi.so.3.0 00:03:52.122 SO libspdk_event_iscsi.so.6.0 00:03:52.122 SYMLINK libspdk_event_vhost_scsi.so 00:03:52.122 SYMLINK libspdk_event_iscsi.so 00:03:52.381 SO libspdk.so.6.0 00:03:52.381 SYMLINK libspdk.so 00:03:52.381 CXX app/trace/trace.o 00:03:52.381 TEST_HEADER include/spdk/accel.h 00:03:52.381 TEST_HEADER include/spdk/accel_module.h 00:03:52.381 TEST_HEADER include/spdk/assert.h 00:03:52.381 CC app/trace_record/trace_record.o 00:03:52.639 TEST_HEADER include/spdk/barrier.h 00:03:52.639 TEST_HEADER include/spdk/base64.h 00:03:52.639 TEST_HEADER include/spdk/bdev.h 00:03:52.639 TEST_HEADER include/spdk/bdev_module.h 00:03:52.639 TEST_HEADER include/spdk/bdev_zone.h 00:03:52.639 TEST_HEADER include/spdk/bit_array.h 00:03:52.639 TEST_HEADER include/spdk/bit_pool.h 00:03:52.639 TEST_HEADER include/spdk/blob_bdev.h 00:03:52.639 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:52.639 TEST_HEADER 
include/spdk/blobfs.h 00:03:52.639 TEST_HEADER include/spdk/blob.h 00:03:52.639 TEST_HEADER include/spdk/conf.h 00:03:52.639 TEST_HEADER include/spdk/config.h 00:03:52.639 TEST_HEADER include/spdk/cpuset.h 00:03:52.639 CC app/nvmf_tgt/nvmf_main.o 00:03:52.639 CC app/iscsi_tgt/iscsi_tgt.o 00:03:52.639 TEST_HEADER include/spdk/crc16.h 00:03:52.639 TEST_HEADER include/spdk/crc32.h 00:03:52.639 TEST_HEADER include/spdk/crc64.h 00:03:52.639 TEST_HEADER include/spdk/dif.h 00:03:52.639 TEST_HEADER include/spdk/dma.h 00:03:52.639 TEST_HEADER include/spdk/endian.h 00:03:52.639 CC app/spdk_tgt/spdk_tgt.o 00:03:52.639 TEST_HEADER include/spdk/env_dpdk.h 00:03:52.639 TEST_HEADER include/spdk/env.h 00:03:52.639 TEST_HEADER include/spdk/event.h 00:03:52.639 TEST_HEADER include/spdk/fd_group.h 00:03:52.639 TEST_HEADER include/spdk/fd.h 00:03:52.639 TEST_HEADER include/spdk/file.h 00:03:52.639 CC examples/util/zipf/zipf.o 00:03:52.639 TEST_HEADER include/spdk/ftl.h 00:03:52.639 TEST_HEADER include/spdk/gpt_spec.h 00:03:52.639 TEST_HEADER include/spdk/hexlify.h 00:03:52.639 TEST_HEADER include/spdk/histogram_data.h 00:03:52.639 TEST_HEADER include/spdk/idxd.h 00:03:52.639 TEST_HEADER include/spdk/idxd_spec.h 00:03:52.639 TEST_HEADER include/spdk/init.h 00:03:52.639 CC test/thread/poller_perf/poller_perf.o 00:03:52.639 TEST_HEADER include/spdk/ioat.h 00:03:52.639 TEST_HEADER include/spdk/ioat_spec.h 00:03:52.639 TEST_HEADER include/spdk/iscsi_spec.h 00:03:52.639 TEST_HEADER include/spdk/json.h 00:03:52.639 TEST_HEADER include/spdk/jsonrpc.h 00:03:52.639 TEST_HEADER include/spdk/keyring.h 00:03:52.639 TEST_HEADER include/spdk/keyring_module.h 00:03:52.639 TEST_HEADER include/spdk/likely.h 00:03:52.639 TEST_HEADER include/spdk/log.h 00:03:52.639 TEST_HEADER include/spdk/lvol.h 00:03:52.639 TEST_HEADER include/spdk/memory.h 00:03:52.639 CC test/app/bdev_svc/bdev_svc.o 00:03:52.639 TEST_HEADER include/spdk/mmio.h 00:03:52.639 TEST_HEADER include/spdk/nbd.h 00:03:52.639 CC test/dma/test_dma/test_dma.o 00:03:52.639 TEST_HEADER include/spdk/net.h 00:03:52.639 TEST_HEADER include/spdk/notify.h 00:03:52.639 TEST_HEADER include/spdk/nvme.h 00:03:52.639 TEST_HEADER include/spdk/nvme_intel.h 00:03:52.639 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:52.639 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:52.639 TEST_HEADER include/spdk/nvme_spec.h 00:03:52.639 TEST_HEADER include/spdk/nvme_zns.h 00:03:52.639 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:52.639 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:52.639 TEST_HEADER include/spdk/nvmf.h 00:03:52.639 TEST_HEADER include/spdk/nvmf_spec.h 00:03:52.639 TEST_HEADER include/spdk/nvmf_transport.h 00:03:52.639 TEST_HEADER include/spdk/opal.h 00:03:52.639 TEST_HEADER include/spdk/opal_spec.h 00:03:52.639 TEST_HEADER include/spdk/pci_ids.h 00:03:52.639 TEST_HEADER include/spdk/pipe.h 00:03:52.639 TEST_HEADER include/spdk/queue.h 00:03:52.639 TEST_HEADER include/spdk/reduce.h 00:03:52.639 TEST_HEADER include/spdk/rpc.h 00:03:52.639 TEST_HEADER include/spdk/scheduler.h 00:03:52.639 TEST_HEADER include/spdk/scsi.h 00:03:52.639 TEST_HEADER include/spdk/scsi_spec.h 00:03:52.639 TEST_HEADER include/spdk/sock.h 00:03:52.639 TEST_HEADER include/spdk/stdinc.h 00:03:52.639 TEST_HEADER include/spdk/string.h 00:03:52.639 TEST_HEADER include/spdk/thread.h 00:03:52.639 TEST_HEADER include/spdk/trace.h 00:03:52.639 TEST_HEADER include/spdk/trace_parser.h 00:03:52.639 TEST_HEADER include/spdk/tree.h 00:03:52.639 TEST_HEADER include/spdk/ublk.h 00:03:52.639 TEST_HEADER include/spdk/util.h 
00:03:52.639 TEST_HEADER include/spdk/uuid.h 00:03:52.639 TEST_HEADER include/spdk/version.h 00:03:52.639 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:52.639 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:52.898 TEST_HEADER include/spdk/vhost.h 00:03:52.898 TEST_HEADER include/spdk/vmd.h 00:03:52.898 TEST_HEADER include/spdk/xor.h 00:03:52.898 TEST_HEADER include/spdk/zipf.h 00:03:52.898 CXX test/cpp_headers/accel.o 00:03:52.898 LINK nvmf_tgt 00:03:52.898 LINK zipf 00:03:52.898 LINK poller_perf 00:03:52.898 LINK spdk_trace_record 00:03:52.898 LINK iscsi_tgt 00:03:52.898 LINK bdev_svc 00:03:52.898 LINK spdk_tgt 00:03:52.898 LINK spdk_trace 00:03:52.898 CXX test/cpp_headers/accel_module.o 00:03:53.156 LINK test_dma 00:03:53.156 CXX test/cpp_headers/assert.o 00:03:53.156 CC examples/vmd/lsvmd/lsvmd.o 00:03:53.156 CC examples/ioat/perf/perf.o 00:03:53.156 CC examples/idxd/perf/perf.o 00:03:53.414 CC examples/ioat/verify/verify.o 00:03:53.414 CXX test/cpp_headers/barrier.o 00:03:53.414 LINK lsvmd 00:03:53.414 CC app/spdk_lspci/spdk_lspci.o 00:03:53.414 CC test/app/histogram_perf/histogram_perf.o 00:03:53.414 LINK ioat_perf 00:03:53.414 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:53.673 CC test/app/jsoncat/jsoncat.o 00:03:53.673 CXX test/cpp_headers/base64.o 00:03:53.673 LINK spdk_lspci 00:03:53.673 LINK histogram_perf 00:03:53.673 LINK verify 00:03:53.673 LINK idxd_perf 00:03:53.673 CC examples/vmd/led/led.o 00:03:53.673 LINK jsoncat 00:03:53.673 CXX test/cpp_headers/bdev.o 00:03:53.673 CC test/app/stub/stub.o 00:03:53.931 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:53.931 LINK led 00:03:53.931 CC app/spdk_nvme_perf/perf.o 00:03:53.931 LINK nvme_fuzz 00:03:53.931 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:53.931 CXX test/cpp_headers/bdev_module.o 00:03:53.931 LINK stub 00:03:53.931 CC app/spdk_nvme_identify/identify.o 00:03:53.931 CC app/spdk_nvme_discover/discovery_aer.o 00:03:54.190 CXX test/cpp_headers/bdev_zone.o 00:03:54.190 LINK interrupt_tgt 00:03:54.190 CXX test/cpp_headers/bit_array.o 00:03:54.190 CXX test/cpp_headers/bit_pool.o 00:03:54.190 LINK spdk_nvme_discover 00:03:54.448 CXX test/cpp_headers/blob_bdev.o 00:03:54.448 CXX test/cpp_headers/blobfs_bdev.o 00:03:54.448 CC examples/thread/thread/thread_ex.o 00:03:54.706 CC test/env/vtophys/vtophys.o 00:03:54.706 CC test/event/event_perf/event_perf.o 00:03:54.706 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.706 CXX test/cpp_headers/blobfs.o 00:03:54.706 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:54.706 LINK vtophys 00:03:54.706 LINK spdk_nvme_perf 00:03:54.965 LINK thread 00:03:54.965 CXX test/cpp_headers/blob.o 00:03:54.965 LINK event_perf 00:03:54.965 LINK env_dpdk_post_init 00:03:55.223 CXX test/cpp_headers/conf.o 00:03:55.223 LINK spdk_nvme_identify 00:03:55.223 CC test/event/reactor/reactor.o 00:03:55.223 CC test/event/reactor_perf/reactor_perf.o 00:03:55.481 CC test/env/memory/memory_ut.o 00:03:55.481 CXX test/cpp_headers/config.o 00:03:55.481 CC test/env/pci/pci_ut.o 00:03:55.481 CXX test/cpp_headers/cpuset.o 00:03:55.481 LINK reactor_perf 00:03:55.481 LINK reactor 00:03:55.481 CC test/nvme/aer/aer.o 00:03:55.481 CC app/spdk_top/spdk_top.o 00:03:55.738 CXX test/cpp_headers/crc16.o 00:03:55.738 CC test/event/app_repeat/app_repeat.o 00:03:55.738 LINK mem_callbacks 00:03:55.738 LINK aer 00:03:55.738 CC test/event/scheduler/scheduler.o 00:03:55.738 LINK iscsi_fuzz 00:03:55.738 CXX test/cpp_headers/crc32.o 00:03:55.995 LINK pci_ut 00:03:55.995 CXX test/cpp_headers/crc64.o 00:03:55.995 LINK app_repeat 
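The TEST_HEADER include/spdk/*.h entries listed above, together with the CXX test/cpp_headers/*.o objects compiled later in this log, check that every public SPDK header builds as a standalone translation unit. A minimal sketch of that style of check follows; the generated-file layout and compiler flags are assumptions for illustration, not the exact harness used here.

    # Illustrative header self-containment check: compile each public header on its own.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/${name}.cpp"
        g++ -std=c++11 -Iinclude -c "test/cpp_headers/${name}.cpp" -o "test/cpp_headers/${name}.o"
    done
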
00:03:55.995 LINK scheduler 00:03:56.251 CC test/nvme/reset/reset.o 00:03:56.251 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:56.251 CXX test/cpp_headers/dif.o 00:03:56.251 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:56.506 CXX test/cpp_headers/dma.o 00:03:56.506 CC examples/accel/perf/accel_perf.o 00:03:56.506 CC examples/sock/hello_world/hello_sock.o 00:03:56.506 CXX test/cpp_headers/endian.o 00:03:56.506 LINK reset 00:03:56.762 CC examples/blob/hello_world/hello_blob.o 00:03:56.762 CC examples/blob/cli/blobcli.o 00:03:56.762 LINK vhost_fuzz 00:03:56.762 LINK memory_ut 00:03:56.762 LINK spdk_top 00:03:56.762 LINK hello_sock 00:03:57.019 CXX test/cpp_headers/env_dpdk.o 00:03:57.019 LINK hello_blob 00:03:57.019 CXX test/cpp_headers/env.o 00:03:57.276 CXX test/cpp_headers/event.o 00:03:57.276 CC test/nvme/sgl/sgl.o 00:03:57.276 LINK blobcli 00:03:57.276 CC test/nvme/e2edp/nvme_dp.o 00:03:57.276 LINK accel_perf 00:03:57.535 CXX test/cpp_headers/fd_group.o 00:03:57.535 CC app/vhost/vhost.o 00:03:57.535 CC test/nvme/overhead/overhead.o 00:03:57.535 CC test/rpc_client/rpc_client_test.o 00:03:57.535 CXX test/cpp_headers/fd.o 00:03:57.535 LINK nvme_dp 00:03:57.791 CC examples/nvme/hello_world/hello_world.o 00:03:57.791 CC app/spdk_dd/spdk_dd.o 00:03:57.791 LINK vhost 00:03:57.791 LINK sgl 00:03:57.791 LINK rpc_client_test 00:03:57.791 CC examples/bdev/hello_world/hello_bdev.o 00:03:57.791 LINK overhead 00:03:58.047 CXX test/cpp_headers/file.o 00:03:58.047 LINK hello_world 00:03:58.047 CXX test/cpp_headers/ftl.o 00:03:58.047 CXX test/cpp_headers/gpt_spec.o 00:03:58.047 CC examples/bdev/bdevperf/bdevperf.o 00:03:58.047 CXX test/cpp_headers/hexlify.o 00:03:58.047 LINK hello_bdev 00:03:58.303 LINK spdk_dd 00:03:58.303 CXX test/cpp_headers/histogram_data.o 00:03:58.303 CC test/nvme/err_injection/err_injection.o 00:03:58.303 CC examples/nvme/reconnect/reconnect.o 00:03:58.303 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:58.303 CC examples/nvme/hotplug/hotplug.o 00:03:58.303 CC examples/nvme/arbitration/arbitration.o 00:03:58.559 CXX test/cpp_headers/idxd.o 00:03:58.559 LINK err_injection 00:03:58.559 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:58.814 CXX test/cpp_headers/idxd_spec.o 00:03:58.815 LINK hotplug 00:03:58.815 CC app/fio/nvme/fio_plugin.o 00:03:59.071 LINK reconnect 00:03:59.071 LINK cmb_copy 00:03:59.071 LINK arbitration 00:03:59.071 CC test/nvme/startup/startup.o 00:03:59.071 LINK nvme_manage 00:03:59.071 CXX test/cpp_headers/init.o 00:03:59.329 LINK bdevperf 00:03:59.329 CXX test/cpp_headers/ioat.o 00:03:59.329 LINK startup 00:03:59.329 CC examples/nvme/abort/abort.o 00:03:59.329 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:59.329 CC app/fio/bdev/fio_plugin.o 00:03:59.587 CXX test/cpp_headers/ioat_spec.o 00:03:59.587 CC test/nvme/reserve/reserve.o 00:03:59.587 LINK pmr_persistence 00:03:59.587 CC test/accel/dif/dif.o 00:03:59.845 CXX test/cpp_headers/iscsi_spec.o 00:03:59.845 CC test/blobfs/mkfs/mkfs.o 00:03:59.845 LINK spdk_nvme 00:03:59.845 CXX test/cpp_headers/json.o 00:03:59.845 LINK reserve 00:03:59.845 CXX test/cpp_headers/jsonrpc.o 00:04:00.103 CC test/lvol/esnap/esnap.o 00:04:00.103 LINK spdk_bdev 00:04:00.103 CXX test/cpp_headers/keyring.o 00:04:00.103 LINK mkfs 00:04:00.103 CC test/nvme/simple_copy/simple_copy.o 00:04:00.103 CXX test/cpp_headers/keyring_module.o 00:04:00.103 LINK abort 00:04:00.103 CXX test/cpp_headers/likely.o 00:04:00.103 CXX test/cpp_headers/log.o 00:04:00.103 LINK dif 00:04:00.362 CXX test/cpp_headers/lvol.o 00:04:00.362 CXX 
test/cpp_headers/memory.o 00:04:00.362 CXX test/cpp_headers/mmio.o 00:04:00.362 CXX test/cpp_headers/nbd.o 00:04:00.362 CXX test/cpp_headers/net.o 00:04:00.619 LINK simple_copy 00:04:00.619 CXX test/cpp_headers/notify.o 00:04:00.619 CXX test/cpp_headers/nvme.o 00:04:00.619 CC test/nvme/connect_stress/connect_stress.o 00:04:00.619 CXX test/cpp_headers/nvme_intel.o 00:04:00.619 CXX test/cpp_headers/nvme_ocssd.o 00:04:00.619 CC examples/nvmf/nvmf/nvmf.o 00:04:00.878 LINK connect_stress 00:04:00.878 CC test/nvme/boot_partition/boot_partition.o 00:04:00.878 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:00.878 CC test/bdev/bdevio/bdevio.o 00:04:00.878 CC test/nvme/fused_ordering/fused_ordering.o 00:04:00.878 CC test/nvme/compliance/nvme_compliance.o 00:04:00.878 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:01.136 LINK boot_partition 00:04:01.136 CC test/nvme/fdp/fdp.o 00:04:01.136 LINK nvmf 00:04:01.136 LINK doorbell_aers 00:04:01.136 CXX test/cpp_headers/nvme_spec.o 00:04:01.393 LINK fused_ordering 00:04:01.393 LINK nvme_compliance 00:04:01.393 LINK bdevio 00:04:01.393 CC test/nvme/cuse/cuse.o 00:04:01.393 CXX test/cpp_headers/nvme_zns.o 00:04:01.393 CXX test/cpp_headers/nvmf_cmd.o 00:04:01.393 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:01.393 CXX test/cpp_headers/nvmf.o 00:04:01.651 CXX test/cpp_headers/nvmf_spec.o 00:04:01.651 CXX test/cpp_headers/nvmf_transport.o 00:04:01.651 CXX test/cpp_headers/opal.o 00:04:01.651 CXX test/cpp_headers/opal_spec.o 00:04:01.651 CXX test/cpp_headers/pci_ids.o 00:04:01.909 CXX test/cpp_headers/pipe.o 00:04:01.909 LINK fdp 00:04:01.909 CXX test/cpp_headers/queue.o 00:04:01.909 CXX test/cpp_headers/reduce.o 00:04:01.909 CXX test/cpp_headers/rpc.o 00:04:01.909 CXX test/cpp_headers/scheduler.o 00:04:01.909 CXX test/cpp_headers/scsi.o 00:04:02.167 CXX test/cpp_headers/scsi_spec.o 00:04:02.167 CXX test/cpp_headers/sock.o 00:04:02.167 CXX test/cpp_headers/stdinc.o 00:04:02.167 CXX test/cpp_headers/string.o 00:04:02.167 CXX test/cpp_headers/thread.o 00:04:02.167 CXX test/cpp_headers/trace.o 00:04:02.167 CXX test/cpp_headers/trace_parser.o 00:04:02.167 CXX test/cpp_headers/tree.o 00:04:02.167 CXX test/cpp_headers/ublk.o 00:04:02.167 CXX test/cpp_headers/util.o 00:04:02.441 CXX test/cpp_headers/uuid.o 00:04:02.441 CXX test/cpp_headers/version.o 00:04:02.441 CXX test/cpp_headers/vfio_user_pci.o 00:04:02.441 CXX test/cpp_headers/vfio_user_spec.o 00:04:02.441 CXX test/cpp_headers/vhost.o 00:04:02.441 CXX test/cpp_headers/vmd.o 00:04:02.441 CXX test/cpp_headers/xor.o 00:04:02.441 CXX test/cpp_headers/zipf.o 00:04:03.006 LINK cuse 00:04:05.560 LINK esnap 00:04:05.560 00:04:05.560 real 1m10.491s 00:04:05.560 user 7m14.911s 00:04:05.560 sys 1m42.053s 00:04:05.560 19:32:31 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:05.560 ************************************ 00:04:05.560 END TEST make 00:04:05.560 ************************************ 00:04:05.560 19:32:31 make -- common/autotest_common.sh@10 -- $ set +x 00:04:05.560 19:32:31 -- common/autotest_common.sh@1142 -- $ return 0 00:04:05.560 19:32:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:05.560 19:32:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:05.560 19:32:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:05.560 19:32:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.560 19:32:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:05.560 19:32:31 -- pm/common@44 -- $ pid=5185 00:04:05.560 
19:32:31 -- pm/common@50 -- $ kill -TERM 5185 00:04:05.560 19:32:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.560 19:32:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:05.560 19:32:31 -- pm/common@44 -- $ pid=5187 00:04:05.560 19:32:31 -- pm/common@50 -- $ kill -TERM 5187 00:04:05.818 19:32:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:05.818 19:32:31 -- nvmf/common.sh@7 -- # uname -s 00:04:05.818 19:32:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.818 19:32:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.818 19:32:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.818 19:32:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.818 19:32:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.818 19:32:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.818 19:32:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.818 19:32:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.818 19:32:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.818 19:32:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.818 19:32:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:04:05.818 19:32:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:04:05.818 19:32:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.818 19:32:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.818 19:32:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:05.818 19:32:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.818 19:32:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:05.818 19:32:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.818 19:32:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.818 19:32:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.818 19:32:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.818 19:32:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.818 19:32:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.818 19:32:31 -- paths/export.sh@5 -- # export PATH 00:04:05.818 19:32:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.818 19:32:31 -- nvmf/common.sh@47 -- # : 0 00:04:05.818 
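The nvmf/common.sh variables sourced above (NVMF_PORT=4420, NVMF_TCP_IP_ADDRESS, NVME_SUBNQN, NVME_HOSTNQN/NVME_HOSTID) are the kind of knobs a kernel-initiator TCP test would hand to nvme-cli. A hedged example of a connect built from them, using standard nvme-cli flags rather than a command taken from this log:

    # Illustrative connect using the variables sourced from nvmf/common.sh above.
    nvme connect -t tcp -a "$NVMF_TCP_IP_ADDRESS" -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
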
19:32:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:05.818 19:32:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:05.818 19:32:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.818 19:32:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.818 19:32:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.818 19:32:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:05.818 19:32:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:05.818 19:32:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:05.818 19:32:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.818 19:32:31 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.818 19:32:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.818 19:32:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:05.818 19:32:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.818 19:32:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.818 19:32:31 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.818 19:32:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:05.818 19:32:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:05.818 19:32:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:05.818 19:32:31 -- spdk/autotest.sh@48 -- # udevadm_pid=54610 00:04:05.818 19:32:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:05.818 19:32:31 -- pm/common@17 -- # local monitor 00:04:05.818 19:32:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.818 19:32:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.818 19:32:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:05.818 19:32:31 -- pm/common@25 -- # sleep 1 00:04:05.818 19:32:31 -- pm/common@21 -- # date +%s 00:04:05.818 19:32:31 -- pm/common@21 -- # date +%s 00:04:05.818 19:32:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721071951 00:04:05.818 19:32:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721071951 00:04:05.818 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721071951_collect-vmstat.pm.log 00:04:05.818 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721071951_collect-cpu-load.pm.log 00:04:06.753 19:32:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:06.753 19:32:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:06.753 19:32:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:06.753 19:32:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.753 19:32:32 -- spdk/autotest.sh@59 -- # create_test_list 00:04:06.753 19:32:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:06.753 19:32:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.011 19:32:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:07.011 19:32:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:07.011 19:32:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:07.011 19:32:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 
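Right before the tests run, autotest swaps the kernel core_pattern for a collector script and starts the background resource monitors whose logs appear above as monitor.autotest.sh.<epoch>_collect-*.pm.log. A minimal sketch of that monitor pattern, with an illustrative sampler, interval, and output directory rather than the real pm scripts:

#!/usr/bin/env bash
# Sketch: launch a background sampler, log it under an epoch-stamped name,
# and record its PID so teardown can later kill -TERM "$(<pidfile)".
set -euo pipefail

out_dir=/home/vagrant/spdk_repo/output/power   # illustrative output directory
mkdir -p "$out_dir"
stamp=$(date +%s)

start_monitor() {
    local name=$1; shift
    local log="$out_dir/monitor.autotest.sh.${stamp}_${name}.pm.log"
    # Sample once a second in the background; stdout/stderr go to the log.
    ( while sleep 1; do "$@"; done ) >> "$log" 2>&1 &
    echo $! > "$out_dir/$name.pid"
}

start_monitor collect-cpu-load cat /proc/loadavg
start_monitor collect-vmstat   vmstat 1 1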
00:04:07.011 19:32:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:07.011 19:32:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:07.011 19:32:32 -- common/autotest_common.sh@1455 -- # uname 00:04:07.011 19:32:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:07.011 19:32:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:07.011 19:32:32 -- common/autotest_common.sh@1475 -- # uname 00:04:07.011 19:32:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:07.011 19:32:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:07.011 19:32:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:07.011 19:32:32 -- spdk/autotest.sh@72 -- # hash lcov 00:04:07.011 19:32:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:07.011 19:32:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:07.011 --rc lcov_branch_coverage=1 00:04:07.011 --rc lcov_function_coverage=1 00:04:07.011 --rc genhtml_branch_coverage=1 00:04:07.011 --rc genhtml_function_coverage=1 00:04:07.011 --rc genhtml_legend=1 00:04:07.011 --rc geninfo_all_blocks=1 00:04:07.011 ' 00:04:07.011 19:32:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:07.011 --rc lcov_branch_coverage=1 00:04:07.011 --rc lcov_function_coverage=1 00:04:07.011 --rc genhtml_branch_coverage=1 00:04:07.011 --rc genhtml_function_coverage=1 00:04:07.011 --rc genhtml_legend=1 00:04:07.011 --rc geninfo_all_blocks=1 00:04:07.011 ' 00:04:07.011 19:32:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:07.011 --rc lcov_branch_coverage=1 00:04:07.011 --rc lcov_function_coverage=1 00:04:07.011 --rc genhtml_branch_coverage=1 00:04:07.011 --rc genhtml_function_coverage=1 00:04:07.011 --rc genhtml_legend=1 00:04:07.011 --rc geninfo_all_blocks=1 00:04:07.011 --no-external' 00:04:07.011 19:32:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:07.011 --rc lcov_branch_coverage=1 00:04:07.011 --rc lcov_function_coverage=1 00:04:07.011 --rc genhtml_branch_coverage=1 00:04:07.011 --rc genhtml_function_coverage=1 00:04:07.011 --rc genhtml_legend=1 00:04:07.011 --rc geninfo_all_blocks=1 00:04:07.011 --no-external' 00:04:07.011 19:32:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:07.011 lcov: LCOV version 1.14 00:04:07.011 19:32:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:21.871 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:21.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:36.800 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:36.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:36.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 
00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:36.801 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:36.801 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:36.801 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:39.322 19:33:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:39.322 19:33:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.322 19:33:04 -- common/autotest_common.sh@10 -- # set +x 00:04:39.322 19:33:04 -- spdk/autotest.sh@91 -- # rm -f 00:04:39.322 19:33:04 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.837 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:39.837 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:39.837 19:33:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:39.837 19:33:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:39.837 19:33:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:39.837 19:33:05 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:39.837 19:33:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.837 19:33:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:39.837 19:33:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:39.837 19:33:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.837 19:33:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.837 19:33:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.837 19:33:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:39.837 19:33:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:39.837 19:33:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.837 19:33:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.837 19:33:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.837 19:33:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:39.837 19:33:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:39.837 19:33:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:39.837 19:33:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.837 19:33:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.837 19:33:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:39.837 19:33:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:39.837 19:33:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:39.837 19:33:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.837 19:33:05 -- spdk/autotest.sh@98 -- # 
(( 0 > 0 )) 00:04:39.837 19:33:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.837 19:33:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:39.837 19:33:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:39.837 19:33:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:39.837 19:33:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.837 No valid GPT data, bailing 00:04:39.837 19:33:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.837 19:33:05 -- scripts/common.sh@391 -- # pt= 00:04:39.837 19:33:05 -- scripts/common.sh@392 -- # return 1 00:04:39.837 19:33:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.837 1+0 records in 00:04:39.837 1+0 records out 00:04:39.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00346829 s, 302 MB/s 00:04:39.837 19:33:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.837 19:33:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:39.837 19:33:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:39.837 19:33:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:39.838 19:33:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:39.838 No valid GPT data, bailing 00:04:39.838 19:33:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:39.838 19:33:05 -- scripts/common.sh@391 -- # pt= 00:04:39.838 19:33:05 -- scripts/common.sh@392 -- # return 1 00:04:39.838 19:33:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:39.838 1+0 records in 00:04:39.838 1+0 records out 00:04:39.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402408 s, 261 MB/s 00:04:39.838 19:33:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.838 19:33:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:39.838 19:33:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:39.838 19:33:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:39.838 19:33:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:40.095 No valid GPT data, bailing 00:04:40.095 19:33:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:40.095 19:33:05 -- scripts/common.sh@391 -- # pt= 00:04:40.095 19:33:05 -- scripts/common.sh@392 -- # return 1 00:04:40.095 19:33:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:40.095 1+0 records in 00:04:40.095 1+0 records out 00:04:40.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00400819 s, 262 MB/s 00:04:40.095 19:33:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:40.095 19:33:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:40.095 19:33:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:40.095 19:33:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:40.095 19:33:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:40.095 No valid GPT data, bailing 00:04:40.095 19:33:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:40.095 19:33:05 -- scripts/common.sh@391 -- # pt= 00:04:40.095 19:33:05 -- scripts/common.sh@392 -- # return 1 00:04:40.095 19:33:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:40.095 1+0 records in 00:04:40.095 1+0 records out 00:04:40.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.00348713 s, 301 MB/s 00:04:40.095 19:33:05 -- spdk/autotest.sh@118 -- # sync 00:04:40.095 19:33:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:40.095 19:33:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:40.095 19:33:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:42.000 19:33:07 -- spdk/autotest.sh@124 -- # uname -s 00:04:42.000 19:33:07 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:42.000 19:33:07 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:42.000 19:33:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.000 19:33:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.000 19:33:07 -- common/autotest_common.sh@10 -- # set +x 00:04:42.000 ************************************ 00:04:42.000 START TEST setup.sh 00:04:42.000 ************************************ 00:04:42.000 19:33:07 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:42.000 * Looking for test storage... 00:04:42.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:42.000 19:33:07 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:42.000 19:33:07 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:42.000 19:33:07 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:42.000 19:33:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.000 19:33:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.000 19:33:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.000 ************************************ 00:04:42.000 START TEST acl 00:04:42.000 ************************************ 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:42.000 * Looking for test storage... 
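The repeated "No valid GPT data, bailing" and dd lines above come from a pre-cleanup loop over the NVMe namespaces: zoned devices and partitions are skipped, each remaining disk is probed for an existing partition table, and only unclaimed disks get their first MiB zeroed. A hedged sketch of that loop, using blkid in place of the spdk-gpt.py helper and an illustrative device glob:

#!/usr/bin/env bash
# Sketch: wipe only NVMe namespaces that are not zoned and carry no
# recognizable partition table, mirroring the autotest pre-cleanup above.
set -euo pipefail
shopt -s extglob nullglob

for dev in /dev/nvme*n!(*p*); do
    name=$(basename "$dev")
    # Zoned namespaces are left alone (blind writes would break write pointers).
    zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned == none ]] || continue
    # A device with a partition-table signature is treated as in use.
    if blkid -s PTTYPE -o value "$dev" | grep -q .; then
        echo "skipping $dev: partition table present"
        continue
    fi
    # Unclaimed disk: clear the first MiB so stale metadata cannot leak in.
    dd if=/dev/zero of="$dev" bs=1M count=1
done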
00:04:42.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:42.000 19:33:07 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:42.000 19:33:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:42.000 19:33:07 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:42.000 19:33:07 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:42.000 19:33:07 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:42.000 19:33:07 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:42.000 19:33:07 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:42.000 19:33:07 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.000 19:33:07 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.567 19:33:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:42.567 19:33:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:42.567 19:33:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.567 19:33:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:42.567 19:33:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.567 19:33:08 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:43.132 19:33:08 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.132 Hugepages 00:04:43.132 node hugesize free / total 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.132 00:04:43.132 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.132 19:33:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.410 19:33:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:43.410 19:33:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.410 19:33:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:43.410 19:33:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.410 19:33:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.410 19:33:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.410 19:33:08 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:43.410 19:33:08 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:43.411 19:33:08 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.411 19:33:08 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.411 19:33:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:43.411 ************************************ 00:04:43.411 START TEST denied 00:04:43.411 ************************************ 00:04:43.411 19:33:08 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:43.411 19:33:08 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:43.411 19:33:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:43.411 19:33:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:43.411 19:33:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.411 19:33:08 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.992 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.992 19:33:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.558 00:04:44.558 real 0m1.321s 00:04:44.558 user 0m0.527s 00:04:44.558 sys 0m0.727s 00:04:44.558 19:33:10 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.558 19:33:10 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:44.558 ************************************ 00:04:44.558 END TEST denied 00:04:44.558 ************************************ 00:04:44.558 19:33:10 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:44.558 19:33:10 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:44.558 19:33:10 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.558 19:33:10 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.558 19:33:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:44.558 ************************************ 00:04:44.558 START TEST allowed 00:04:44.558 ************************************ 00:04:44.558 19:33:10 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:44.558 19:33:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:44.558 19:33:10 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:44.558 19:33:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:44.558 19:33:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.558 19:33:10 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.514 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.514 19:33:10 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:45.514 19:33:10 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:45.514 19:33:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:45.514 19:33:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:45.514 19:33:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:45.514 19:33:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:45.514 19:33:11 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:45.514 19:33:11 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:45.514 19:33:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.514 19:33:11 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.080 00:04:46.080 real 0m1.314s 00:04:46.080 user 0m0.601s 00:04:46.080 sys 0m0.713s 00:04:46.080 19:33:11 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:46.080 19:33:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:46.080 ************************************ 00:04:46.080 END TEST allowed 00:04:46.080 ************************************ 00:04:46.080 19:33:11 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:46.080 00:04:46.080 real 0m4.239s 00:04:46.080 user 0m1.848s 00:04:46.080 sys 0m2.328s 00:04:46.080 19:33:11 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.080 19:33:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:46.080 ************************************ 00:04:46.080 END TEST acl 00:04:46.080 ************************************ 00:04:46.080 19:33:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:46.080 19:33:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:46.080 19:33:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.080 19:33:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.080 19:33:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.080 ************************************ 00:04:46.080 START TEST hugepages 00:04:46.080 ************************************ 00:04:46.080 19:33:11 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:46.080 * Looking for test storage... 00:04:46.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 5879828 kB' 'MemAvailable: 7391196 kB' 'Buffers: 2436 kB' 'Cached: 1722976 kB' 'SwapCached: 0 kB' 'Active: 476724 kB' 'Inactive: 1352724 kB' 'Active(anon): 114524 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 105708 kB' 'Mapped: 48480 kB' 'Shmem: 10488 kB' 'KReclaimable: 67216 kB' 'Slab: 140452 kB' 'SReclaimable: 67216 kB' 'SUnreclaim: 73236 kB' 'KernelStack: 6332 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 333220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.080 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.081 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.082 19:33:11 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.082 19:33:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:46.082 19:33:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.082 19:33:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.082 19:33:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.082 ************************************ 00:04:46.082 START TEST default_setup 00:04:46.082 ************************************ 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.082 19:33:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.910 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.910 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.910 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7914096 kB' 'MemAvailable: 9425240 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493740 kB' 'Inactive: 1352732 kB' 'Active(anon): 131540 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 122820 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 139968 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6352 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
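The hugepages.sh preparation traced above (default_hugepages=2048, nr_hugepages=1024, the clear_hp "echo 0" loop, CLEAR_HUGE=yes) boils down to a small amount of arithmetic plus sysfs writes. A minimal sketch of that step, reconstructed from the traced commands only — the variable names, the kB unit for the requested size, and the root-privilege handling are assumptions, not the script's exact code:

# Requested test size divided by the default hugepage size gives the page count:
#   2097152 / 2048 = 1024, matching HugePages_Total: 1024 in the meminfo snapshots in this log.
size_kb=2097152
default_hugepages_kb=2048
nr_hugepages=$(( size_kb / default_hugepages_kb ))

# clear_hp-style reset: zero every per-node hugepage pool before default_setup runs
# (the trace shows "echo 0" against each node*/hugepages/hugepages-*/ entry).
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"    # assumes the job runs with root privileges, as in this CI environment
done
export CLEAR_HUGE=yes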
00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.911 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7914096 kB' 'MemAvailable: 9425240 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493472 kB' 'Inactive: 1352732 kB' 'Active(anon): 131272 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 122540 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 139968 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6320 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.912 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
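The long [[ key == ... ]] / continue runs throughout this trace are all the same get_meminfo loop from setup/common.sh: it snapshots /proc/meminfo, walks it field by field with IFS=': ', and echoes the value once the requested key (Hugepagesize, AnonHugePages, HugePages_Surp, HugePages_Rsvd) matches. A simplified standalone sketch of that pattern — the function name is illustrative, and it reads /proc/meminfo directly instead of using the mapfile/printf snapshot the real helper shows in the trace:

get_meminfo_value() {
    # Print the value column for one /proc/meminfo key, e.g. "Hugepagesize" -> 2048.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip fields until the requested key matches
        echo "$val"                        # most values are in kB; HugePages_* are page counts
        return 0
    done < /proc/meminfo
    return 1                               # key not present
}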
00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.913 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7914096 kB' 'MemAvailable: 9425240 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493472 kB' 'Inactive: 1352732 kB' 'Active(anon): 131272 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 122280 kB' 'Mapped: 
48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 139968 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6320 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:46.914 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
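(Editor's note on the trace pattern.) The long runs of `IFS=': '` / `read -r var val _` / `continue` entries in this stretch of the log are bash xtrace of the `get_meminfo()` helper in setup/common.sh stepping over every field of /proc/meminfo (or a per-node meminfo file) until it reaches the requested key, in this call `HugePages_Rsvd`. A condensed sketch of that pattern, reconstructed from the commands visible in the trace rather than copied verbatim from the script:

shopt -s extglob   # needed for the "Node N " prefix strip below

get_meminfo() {
    # get_meminfo <field> [node]   e.g. get_meminfo HugePages_Rsvd 0
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-local meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem line var val _
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it so the keys line up.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the requested field, keep scanning
        echo "$val"
        return 0
    done
    return 1
}

Callers in setup/hugepages.sh capture the echoed value via command substitution, which is why entries such as `hugepages.sh@99 -- # surp=0` and `hugepages.sh@100 -- # get_meminfo HugePages_Rsvd` bracket each of these scans in the trace.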
00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.915 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 
19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:46.916 nr_hugepages=1024 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.916 resv_hugepages=0 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.916 surplus_hugepages=0 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.916 anon_hugepages=0 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7914752 kB' 'MemAvailable: 9425896 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493392 kB' 'Inactive: 1352732 kB' 'Active(anon): 131192 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352732 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 122592 kB' 'Mapped: 48484 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 139968 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6304 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.916 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.917 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 
19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.918 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7915188 kB' 'MemUsed: 4326776 kB' 'SwapCached: 0 kB' 'Active: 493704 kB' 'Inactive: 1352740 kB' 'Active(anon): 131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1725404 kB' 'Mapped: 48744 kB' 'AnonPages: 122636 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66748 kB' 'Slab: 139960 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.919 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.178 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
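The block above is the tail of get_meminfo() resolving HugePages_Surp for the default_setup test: with IFS=': ' it walks /proc/meminfo one "key: value" pair at a time, continues past every key that does not match the requested one, and finally echoes the matching value (0 here) before returning. Each "# continue" entry in the trace therefore corresponds to one non-matching meminfo key, which is why the scan dominates this part of the log. A minimal standalone sketch of that lookup pattern follows; the function name and the direct read from /proc/meminfo are illustrative simplifications, not the exact setup/common.sh source (which first captures the file into an array):

#!/usr/bin/env bash
# Sketch: resolve one key from /proc/meminfo the way the xtrace above does,
# splitting each "key: value" line on IFS=': ' and skipping non-matching keys.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every skipped key appears as "# continue" in the trace
        echo "$val"                        # value only; the trailing "kB" unit lands in "_"
        return 0
    done < /proc/meminfo
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp  -> prints 0 on this test VM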
00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.179 node0=1024 expecting 1024 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:47.179 00:04:47.179 real 0m0.907s 00:04:47.179 user 0m0.453s 00:04:47.179 sys 0m0.396s 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.179 19:33:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:47.179 ************************************ 00:04:47.179 END TEST default_setup 00:04:47.179 ************************************ 00:04:47.179 19:33:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:47.179 19:33:12 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:47.179 19:33:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.179 19:33:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.179 19:33:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.179 ************************************ 00:04:47.179 START TEST per_node_1G_alloc 00:04:47.179 ************************************ 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.179 19:33:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.179 19:33:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.444 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.444 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.444 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8960516 kB' 'MemAvailable: 10471664 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 494132 kB' 'Inactive: 1352740 kB' 'Active(anon): 131932 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139952 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73208 kB' 'KernelStack: 6308 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.445 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8960516 kB' 'MemAvailable: 10471664 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493532 kB' 'Inactive: 1352740 kB' 'Active(anon): 131332 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122440 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73224 kB' 'KernelStack: 6304 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.446 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.447 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8960516 kB' 'MemAvailable: 10471664 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493308 kB' 'Inactive: 1352740 kB' 'Active(anon): 131108 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122268 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73224 kB' 'KernelStack: 6320 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.448 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.449 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
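The long run of '# [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' / '# continue' pairs around this point is the xtrace of get_meminfo in setup/common.sh: it walks every key of /proc/meminfo (or a per-node meminfo file) with IFS=': ' and read -r var val _, skips non-matching keys via continue, and echoes the value once it reaches the requested field (here HugePages_Rsvd, which comes back 0). Reconstructed from the @-line references visible in this trace, the helper looks roughly like the sketch below; treat it as an approximation for readability, not the exact SPDK source:

    # Approximate reconstruction of setup/common.sh:get_meminfo based on the
    # xtrace above; details such as error handling may differ in SPDK itself.
    shopt -s extglob    # needed for the +([0-9]) pattern used below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read the node-specific meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the requested key: keep scanning
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called as 'get_meminfo HugePages_Rsvd' (system-wide) or 'get_meminfo HugePages_Surp 0' (node 0), which matches the invocations visible elsewhere in this trace.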
00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 
19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.450 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.451 nr_hugepages=512 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:47.451 resv_hugepages=0 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.451 surplus_hugepages=0 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.451 anon_hugepages=0 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8960516 kB' 'MemAvailable: 10471664 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493568 kB' 'Inactive: 1352740 kB' 'Active(anon): 131368 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122528 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 
kB' 'Slab: 139968 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73224 kB' 'KernelStack: 6320 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
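By this point hugepages.sh has settled the system-wide accounting for the test: surp=0 (hugepages.sh@99), resv=0 (@100), and the echoed nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0. The second full meminfo scan running here fetches HugePages_Total so the test can confirm that 512 == nr_hugepages + surp + resv before it moves on to the per-node pass (get_nodes, then a node0 lookup that ends in 'node0=512 expecting 512' further down). A rough sketch of that verification, assembled from the line references in this trace, follows; helper and variable names are taken from the trace where visible, and the rest is an illustrative assumption rather than the literal hugepages.sh code:

    # Sketch of the check per_node_1G_alloc performs around this point.
    # get_meminfo is the setup/common.sh helper traced above; everything
    # else here is a simplified stand-in for the hugepages.sh logic.
    verify_per_node_alloc() {
        local nr_hugepages=512
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)      # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
        total=$(get_meminfo HugePages_Total)    # 512 in this run

        # System-wide totals must account for every configured page.
        ((total == nr_hugepages + surp + resv)) || return 1

        # This VM exposes a single NUMA node, so node0 alone is expected
        # to hold all 512 pages.
        local node node_total
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            node_total=$(get_meminfo HugePages_Total "$node")
            echo "node$node=$node_total expecting $nr_hugepages"
            [[ $node_total == "$nr_hugepages" ]] || return 1
        done
    }

In this run such a check would print the same 'node0=512 expecting 512' line that appears later in the log, just before the test reports its timing and END TEST marker.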
00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.451 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 
19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.452 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8961160 kB' 'MemUsed: 3280804 kB' 'SwapCached: 0 kB' 'Active: 493548 kB' 'Inactive: 1352740 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1725404 kB' 'Mapped: 48480 kB' 'AnonPages: 122204 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 139948 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73204 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.453 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.454 node0=512 expecting 512
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:47.454
00:04:47.454 real 0m0.476s
00:04:47.454 user 0m0.259s
00:04:47.454 sys 0m0.248s
00:04:47.454 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:47.710 19:33:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:47.710 ************************************
00:04:47.710 END TEST per_node_1G_alloc
00:04:47.710 ************************************
00:04:47.710 19:33:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:47.710 19:33:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:47.710 19:33:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:47.710 19:33:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
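
The even_2G_alloc run that starts below sizes its pool with get_test_nr_hugepages 2097152; the arithmetic behind the values it traces is just the requested size in kB divided by the 2048 kB default hugepage size, with the whole budget landing on the single node present in this VM (variable names here are illustrative, not SPDK's):

    size_kb=2097152          # requested pool: 2 GiB expressed in kB
    hugepage_kb=2048         # Hugepagesize reported in the meminfo snapshots below
    nr_hugepages=$(( size_kb / hugepage_kb ))    # 2097152 / 2048 = 1024 -> NRHUGE=1024
    no_nodes=1               # only node0 exists here
    echo "nodes_test[0]=$(( nr_hugepages / no_nodes ))"    # 1024, matching the trace
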
00:04:47.710 19:33:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:47.710 ************************************
00:04:47.710 START TEST even_2G_alloc
00:04:47.710 ************************************
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.710 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:47.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:47.973 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:47.973 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc
-- setup/hugepages.sh@92 -- # local surp 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7923992 kB' 'MemAvailable: 9435140 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493620 kB' 'Inactive: 1352740 kB' 'Active(anon): 131420 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122496 kB' 'Mapped: 48572 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139932 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73188 kB' 'KernelStack: 6244 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.973 19:33:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[... the AnonHugePages lookup walks the same read/compare/continue loop over /proc/meminfo; every field from MemAvailable through KernelStack is checked and skipped ...]
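
This AnonHugePages lookup only runs because the transparent-hugepage gate at setup/hugepages.sh@96 passed ('always [madvise] never' does not contain '[never]'); a rough sketch of that gate, assuming the standard sysfs path and reusing the hypothetical helper sketched earlier:

    # Sample AnonHugePages only when THP is not globally disabled.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_field AnonHugePages)    # resolves to 0 here
    else
        anon=0
    fi
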
[... the walk continues through PageTables up to VmallocTotal, still with no match ...]
00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7924376 kB' 'MemAvailable: 9435524 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493436 kB' 'Inactive: 
1352740 kB' 'Active(anon): 131236 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122608 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139932 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73188 kB' 'KernelStack: 6272 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.974 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.975 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_
[... the HugePages_Surp lookup repeats the same read/compare/continue walk; the fields from Active through AnonHugePages are checked and skipped ...]
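
The meminfo snapshots printed above are consistent on the hugepage side: the pool that was just allocated accounts for HugePages_Total times Hugepagesize of memory, which is exactly the Hugetlb figure in the dump (a quick check using the values from this log):

    hugepages_total=1024     # HugePages_Total in the snapshots above
    hugepagesize_kb=2048     # Hugepagesize
    echo $(( hugepages_total * hugepagesize_kb ))    # 2097152 kB == the Hugetlb: field
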
[... the walk continues through ShmemHugePages up to HugePages_Total, still with no match ...]
00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # read -r var val _ 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7924376 kB' 'MemAvailable: 9435524 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493304 kB' 'Inactive: 1352740 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122508 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139932 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73188 kB' 'KernelStack: 6320 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.976 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _
[... the HugePages_Rsvd lookup walks the same read/compare/continue loop; the fields from Active(anon) through Slab are checked and skipped ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.977 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.978 nr_hugepages=1024 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:47.978 resv_hugepages=0 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.978 surplus_hugepages=0 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.978 anon_hugepages=0 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.978 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7924376 kB' 'MemAvailable: 9435524 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493408 kB' 'Inactive: 1352740 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122312 kB' 'Mapped: 48540 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139932 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73188 kB' 'KernelStack: 6336 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.979 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.980 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.239 19:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7924376 kB' 'MemUsed: 4317588 kB' 'SwapCached: 0 kB' 'Active: 493384 kB' 'Inactive: 1352740 kB' 'Active(anon): 131184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1725404 kB' 'Mapped: 48480 kB' 'AnonPages: 122312 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 139924 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.239 19:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.239 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.240 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.241 node0=1024 expecting 1024 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.241 00:04:48.241 real 0m0.524s 00:04:48.241 user 0m0.273s 00:04:48.241 sys 0m0.281s 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.241 19:33:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:48.241 ************************************ 00:04:48.241 END TEST even_2G_alloc 00:04:48.241 ************************************ 00:04:48.241 19:33:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:48.241 19:33:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:48.241 19:33:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.241 19:33:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.241 19:33:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.241 ************************************ 00:04:48.241 START TEST odd_alloc 00:04:48.241 ************************************ 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
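[editor's note] The even_2G_alloc result above ("node0=1024 expecting 1024") and the odd_alloc setup that follows both reduce to the same check: the hugepage count requested by the test must match what the kernel reports system-wide in /proc/meminfo and per NUMA node in /sys/devices/system/node/node*/meminfo. The trace shows setup/common.sh doing this with a pure-bash read loop over every "key: value" pair (stripping the "Node N " prefix first), which is why each meminfo field appears once per lookup. A minimal stand-alone sketch of the same verification is below; it is illustrative only, uses awk instead of the script's bash loop, and the script name and "expected" parameter are hypothetical, not part of the SPDK scripts.

  #!/usr/bin/env bash
  # verify_hugepages.sh (hypothetical) - cross-check the configured hugepage count
  # against /proc/meminfo and the per-node meminfo files, as the tests above do.
  set -euo pipefail

  expected=${1:-1024}   # e.g. 1024 for even_2G_alloc, 1025 for odd_alloc

  # System-wide count: "/proc/meminfo" has lines like "HugePages_Total:    1024".
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

  # Per-node counts: node meminfo lines carry an extra "Node N " prefix,
  # e.g. "Node 0 HugePages_Total:  1024", so the value is field 4.
  node_sum=0
  for f in /sys/devices/system/node/node*/meminfo; do
      n=$(awk '/HugePages_Total:/ {print $4}' "$f")
      node_sum=$((node_sum + n))
  done

  echo "expected=$expected total=$total node_sum=$node_sum"
  [[ $total -eq $expected && $node_sum -eq $expected ]]

On the single-node VM used here the per-node sum equals the global count (1024 for the even allocation, 1025 once odd_alloc raises nr_hugepages), which is exactly the "(( 1024 == nr_hugepages + surp + resv ))" arithmetic visible in the trace, with reserved and surplus pages both at 0.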
00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.241 19:33:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.531 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.531 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.531 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7917880 kB' 'MemAvailable: 9429028 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493884 kB' 'Inactive: 1352740 kB' 'Active(anon): 131684 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123012 kB' 'Mapped: 48516 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139920 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73176 kB' 'KernelStack: 6340 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 
19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.532 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 
19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
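
The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above are a single get_meminfo pass: setup/common.sh walks /proc/meminfo (or a per-node meminfo under /sys/devices/system/node when a node is given) field by field, skips every key that does not match the requested one, and echoes the matching value, which is how anon=0 is produced before the identical scan restarts here for HugePages_Surp and again below for HugePages_Rsvd. A condensed reconstruction of that loop, inferred from the trace rather than copied from setup/common.sh (the real helper also does the mapfile preprocessing visible in the log):

    # Approximate reconstruction of the get_meminfo scan traced above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do          # split "Key:  value kB" into fields
            [[ $var == "$get" ]] || continue          # the repeated 'continue' entries in the log
            echo "$val"                               # the unit ("kB") falls into $_
            return 0
        done < /proc/meminfo
    }
    get_meminfo AnonHugePages                         # -> 0, matching anon=0 in the trace

The surp and resv values collected this way are both 0 here, so the consistency check seen later in this trace, (( 1025 == nr_hugepages + surp + resv )), reduces to 1025 == 1025 and passes.
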
00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7918052 kB' 'MemAvailable: 9429200 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493324 kB' 'Inactive: 1352740 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122532 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139932 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73188 kB' 'KernelStack: 6320 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.533 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.534 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 
19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7918052 kB' 'MemAvailable: 9429200 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493296 kB' 'Inactive: 1352740 kB' 'Active(anon): 131096 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122236 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139932 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73188 kB' 'KernelStack: 6304 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.535 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.536 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.537 nr_hugepages=1025 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:48.537 resv_hugepages=0 00:04:48.537 surplus_hugepages=0 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.537 anon_hugepages=0 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7920276 kB' 'MemAvailable: 9431424 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493564 kB' 'Inactive: 1352740 kB' 'Active(anon): 131364 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122628 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139940 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73196 kB' 'KernelStack: 6336 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 351124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.537 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.538 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7934040 kB' 'MemUsed: 4307924 kB' 'SwapCached: 0 kB' 'Active: 493644 kB' 'Inactive: 1352740 kB' 'Active(anon): 131444 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1725404 kB' 'Mapped: 48480 kB' 'AnonPages: 122624 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 139936 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.539 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
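The two meminfo snapshots dumped above are internally consistent: the system-wide snapshot reports HugePages_Total: 1025 and Hugepagesize: 2048 kB, which accounts exactly for its Hugetlb: 2099200 kB line (1025 * 2048 = 2099200), and the node0 snapshot's MemUsed: 4307924 kB equals MemTotal 12241964 kB minus MemFree 7934040 kB. A minimal standalone check of the same identity, assuming only the standard /proc/meminfo field names (no SPDK helpers involved):

    # Verify HugePages_Total * Hugepagesize == Hugetlb on the running kernel.
    # The Hugetlb field is absent on older kernels; the check is skipped then.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
    [[ -n $hugetlb ]] && echo "$((total * size_kb)) kB in huge pages, Hugetlb reports $hugetlb kB"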
00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.540 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.836 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
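The per-node lookup above reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix, and the traced expansion mem=("${mem[@]#Node +([0-9]) }") strips that prefix so the same key/value parser works for both the per-node file and /proc/meminfo. A one-line illustration of that extglob expansion (the sample line is made up):

    shopt -s extglob                       # +([0-9]) is an extglob pattern
    line='Node 0 HugePages_Surp:      0'   # hypothetical per-node meminfo line
    echo "${line#Node +([0-9]) }"          # prints: HugePages_Surp:      0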
00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
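Stripped of the xtrace noise, every get_meminfo call traced here does the same thing: pick /proc/meminfo or the per-node meminfo file, drop the "Node N " prefix, scan line by line for the requested key and print its value. A sketch reconstructed from the commands visible in the setup/common.sh trace; the loop scaffolding and error handling are assumptions, not the verbatim helper:

    shopt -s extglob
    get_meminfo() {                       # sketch, not the verbatim SPDK helper
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem entry
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            [[ $var == "$get" ]] || continue   # skip until the key matches
            echo "$val" && return 0            # e.g. 0 for HugePages_Surp
        done
        return 1
    }
    # get_meminfo HugePages_Surp 0   -> 0, matching the node0 snapshot above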
00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.837 node0=1025 expecting 1025 00:04:48.837 ************************************ 00:04:48.837 END TEST odd_alloc 00:04:48.837 ************************************ 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:48.837 00:04:48.837 real 0m0.483s 00:04:48.837 user 0m0.251s 00:04:48.837 sys 0m0.246s 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.837 19:33:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:48.837 19:33:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:48.837 19:33:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:48.837 19:33:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.837 19:33:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.837 19:33:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.837 ************************************ 00:04:48.837 START TEST custom_alloc 00:04:48.837 ************************************ 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:48.837 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.838 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.101 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.101 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.101 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8983868 kB' 'MemAvailable: 10495016 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493592 kB' 'Inactive: 1352740 kB' 'Active(anon): 131392 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122540 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139956 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73212 kB' 'KernelStack: 6324 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
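For custom_alloc the trace above requests 1048576 kB of huge pages via get_test_nr_hugepages and ends up with nr_hugepages=512 pinned to the only node (HUGENODE='nodes_hp[0]=512'), which the snapshot just printed confirms as HugePages_Total: 512 and Hugetlb: 1048576 kB. The sizing presumably reduces to dividing the requested size by the default hugepage size; a sketch of that arithmetic (the division itself is an inference from the traced values, not quoted code):

    default_hugepages=2048                       # kB, Hugepagesize in the snapshots
    size=1048576                                 # kB, argument to get_test_nr_hugepages
    nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512
    HUGENODE="nodes_hp[0]=$nr_hugepages"         # matches HUGENODE='nodes_hp[0]=512'
    echo "$nr_hugepages pages = $((nr_hugepages * default_hugepages)) kB of hugetlb"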
00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.102 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241964 kB' 'MemFree: 8983868 kB' 'MemAvailable: 10495016 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493708 kB' 'Inactive: 1352740 kB' 'Active(anon): 131508 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122656 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6308 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.103 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.104 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8984388 kB' 'MemAvailable: 10495540 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493380 kB' 'Inactive: 1352744 kB' 'Active(anon): 131180 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122640 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6336 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.105 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.106 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:49.107 nr_hugepages=512 00:04:49.107 resv_hugepages=0 
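Each of the scans above (AnonHugePages, HugePages_Surp, HugePages_Rsvd, and HugePages_Total below) follows the same setup/common.sh get_meminfo pattern: slurp the relevant meminfo file, strip any leading "Node <id> " prefix, then walk the lines with IFS=': ' until the requested key matches and print its value. A compact sketch of that technique, written against /proc/meminfo (the real helper's per-node handling is only hinted at here):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch of the meminfo lookup traced above; not the real setup/common.sh helper.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # A per-node query reads that node's meminfo, whose lines carry a
        # "Node <id> " prefix that must be stripped before the keys match.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # no-op for /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                   # e.g. 512 for HugePages_Total on this box
                return 0
            fi
        done
        return 1
    }
    get_meminfo HugePages_Free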
00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.107 surplus_hugepages=0 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.107 anon_hugepages=0 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.107 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8984540 kB' 'MemAvailable: 10495692 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493256 kB' 'Inactive: 1352744 kB' 'Active(anon): 131056 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122516 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139960 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73216 kB' 'KernelStack: 6304 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.108 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 
19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.109 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8984540 kB' 'MemUsed: 3257424 kB' 'SwapCached: 0 kB' 'Active: 493280 kB' 'Inactive: 1352744 kB' 'Active(anon): 131080 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1725408 kB' 'Mapped: 48480 kB' 'AnonPages: 122556 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 139960 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.110 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.111 19:33:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.111 node0=512 expecting 512 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:49.111 00:04:49.111 real 0m0.496s 00:04:49.111 user 0m0.258s 00:04:49.111 sys 0m0.271s 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.111 19:33:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:49.111 ************************************ 00:04:49.111 END TEST custom_alloc 
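
Editor's note: the custom_alloc pass closes above with node0=512 expecting 512 and the final [[ 512 == 512 ]] comparison. The bookkeeping behind that line, sketched from the hugepages.sh xtrace (array and helper names follow the trace, and get_meminfo refers to the sketch shown earlier; the real verify path handles more cases than this):

  # Sketch of the per-node check that ends the custom_alloc pass (one NUMA node).
  shopt -s extglob
  nodes_sys=() nodes_test=()
  nr_hugepages=512
  surp=$(get_meminfo HugePages_Surp)   # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) \
    || echo "unexpected global hugepage count"

  for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done
  nodes_test[0]=$nr_hugepages          # custom_alloc asked for 512 pages on node 0

  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"  # node0=512 expecting 512
  done

With surplus and reserved both 0, the per-node tally stays at 512 and matches the kernel's own HugePages_Total for node 0, which is what the PASS below rests on. The no_shrink_alloc test that starts next reuses the same machinery with a 1024-page pool.
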
00:04:49.111 ************************************ 00:04:49.371 19:33:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:49.371 19:33:14 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:49.371 19:33:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.371 19:33:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.371 19:33:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.371 ************************************ 00:04:49.371 START TEST no_shrink_alloc 00:04:49.371 ************************************ 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.371 19:33:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.634 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.634 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:49.634 
19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7940168 kB' 'MemAvailable: 9451320 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493912 kB' 'Inactive: 1352744 kB' 'Active(anon): 131712 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122888 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 140016 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73272 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 353364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
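
Editor's note: the no_shrink_alloc setup above sizes its pool from a size argument: get_test_nr_hugepages 2097152 0 resolves to 1024 pages on node 0, which matches the HugePages_Total: 1024 visible in the meminfo dump. A condensed sketch of that computation (it folds get_test_nr_hugepages and its per-node helper into one function, and the kB-based argument convention is inferred from the numbers in the trace, 2097152 / 2048 = 1024, not from the script's documentation):

  # Sketch of the pool sizing at the start of no_shrink_alloc above.
  nodes_test=()
  get_test_nr_hugepages() {
    local size=$1; shift
    local node_ids=("$@")                         # ("0") in this run
    local default_hugepages node
    default_hugepages=$(get_meminfo Hugepagesize) # 2048 kB on this VM
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 pages
    for node in "${node_ids[@]}"; do
      nodes_test[$node]=$nr_hugepages             # all 1024 pages expected on node 0
    done
  }

  get_test_nr_hugepages 2097152 0                 # matches the call in the trace

The verify pass that continues below appears to first check /sys/kernel/mm/transparent_hugepage/enabled (the "always [madvise] never" string in the trace); since THP is not pinned to [never], it reads AnonHugePages as the anonymous-hugepage baseline, which comes back 0 here.
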
00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.634 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 
19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 
19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.635 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7940424 kB' 'MemAvailable: 9451572 kB' 'Buffers: 2436 kB' 'Cached: 1722968 kB' 'SwapCached: 0 kB' 'Active: 493612 kB' 'Inactive: 1352740 kB' 'Active(anon): 131412 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352740 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122588 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 140016 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73272 kB' 'KernelStack: 6304 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.636 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.636 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.637 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7939920 kB' 'MemAvailable: 9451072 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493328 kB' 'Inactive: 1352744 kB' 'Active(anon): 131128 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122512 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 140016 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73272 kB' 'KernelStack: 6288 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.638 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.639 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:49.640 nr_hugepages=1024 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:49.640 resv_hugepages=0 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.640 surplus_hugepages=0 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.640 anon_hugepages=0 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
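At this point setup/hugepages.sh has collected the three counters it cares about: anon (AnonHugePages), surp (HugePages_Surp) and resv (HugePages_Rsvd) all came back 0, the pool size is echoed as nr_hugepages=1024, and the two arithmetic guards at hugepages.sh@107 and @109 pass. Condensed into plain bash, the step looks roughly as follows (a sketch mirroring the trace; nr_hugepages itself is set earlier in the test, outside this excerpt, and the literal 1024 in the guards is the already-expanded value of whatever variable the script compares against):

  # Accounting step of the no_shrink_alloc test; get_meminfo as sketched earlier.
  nr_hugepages=1024                             # requested pool size, established before this excerpt
  anon=$(get_meminfo AnonHugePages)             # hugepages.sh@97  -> 0
  surp=$(get_meminfo HugePages_Surp)            # hugepages.sh@99  -> 0
  resv=$(get_meminfo HugePages_Rsvd)            # hugepages.sh@100 -> 0
  echo "nr_hugepages=$nr_hugepages"             # the interleaved "nr_hugepages=1024" output above
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"
  (( 1024 == nr_hugepages + surp + resv ))      # hugepages.sh@107: no surplus or reserved pages appeared
  (( 1024 == nr_hugepages ))                    # hugepages.sh@109: pool still equals the requested count

With both guards satisfied, the script immediately starts one more get_meminfo pass (hugepages.sh@110, HugePages_Total) to read the pool size back from the kernel for the same comparison.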
00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7939920 kB' 'MemAvailable: 9451072 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493208 kB' 'Inactive: 1352744 kB' 'Active(anon): 131008 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122348 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 140004 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73260 kB' 'KernelStack: 6256 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.640 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
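The /proc/meminfo snapshot just dumped is the one this HugePages_Total pass scans, and its hugepage counters are internally consistent: 1024 pages of 2048 kB each account for the reported Hugetlb figure (1024 * 2048 kB = 2097152 kB, i.e. a 2 GiB static pool), and HugePages_Free equals HugePages_Total, so none of the pool is handed out while the test verifies that it has not shrunk. The same counters can be eyeballed directly on the test VM with a plain grep, for example:

  grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo
  # Values reported in this run:
  #   HugePages_Total: 1024
  #   HugePages_Free:  1024
  #   HugePages_Rsvd:  0
  #   HugePages_Surp:  0
  #   Hugepagesize:    2048 kB
  #   Hugetlb:         2097152 kB   (= 1024 * 2048 kB)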
00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.641 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7939920 kB' 'MemUsed: 4302044 kB' 'SwapCached: 0 kB' 'Active: 493324 kB' 'Inactive: 1352744 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1725408 kB' 'Mapped: 48480 kB' 'AnonPages: 122496 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 140004 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.642 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 
19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.643 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.644 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.644 node0=1024 expecting 1024 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.644 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.202 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:50.202 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:50.202 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:50.202 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7941540 kB' 'MemAvailable: 9452692 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 494152 kB' 'Inactive: 1352744 kB' 'Active(anon): 131952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139964 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6324 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
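The hugepages.sh@107-130 steps traced a little further up assert that the kernel-reported HugePages_Total equals the configured count plus surplus and reserved pages, then repeat the expectation per NUMA node ("node0=1024 expecting 1024"). The following is a sketch of that arithmetic under the same assumptions (the sysfs layout visible in the log, 1024 pages expected in this run); it illustrates the check, it is not the SPDK script.

#!/usr/bin/env bash
# Sketch of the consistency check traced above (hugepages.sh@107-130).
expected=1024   # stand-in for the count this run verifies against

kernel_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Same assertion the trace evaluates: reported total == expected + surplus + reserved.
if (( kernel_total == expected + surp + resv )); then
    echo "hugepage pool consistent: total=$kernel_total surp=$surp resv=$resv"
else
    echo "hugepage pool mismatch: total=$kernel_total expected=$expected surp=$surp resv=$resv"
fi

# Per-node view, matching the "node0=1024 expecting 1024" line in the log.
for node_meminfo in /sys/devices/system/node/node[0-9]*/meminfo; do
    node_dir=${node_meminfo%/meminfo}
    node=${node_dir##*node}
    per_node=$(awk '/HugePages_Total:/ {print $4}' "$node_meminfo")
    echo "node${node}=${per_node} expecting ${expected}"
done
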
00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.202 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted likewise fail the match at setup/common.sh@32 and fall through to 'continue' ...]
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.203 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7941540 kB' 'MemAvailable: 9452692 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493612 kB' 'Inactive: 1352744 kB' 'Active(anon): 131412 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122528 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139952 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73208 kB' 'KernelStack: 6228 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: setup/common.sh@31-32 walk every /proc/meminfo field from MemTotal onward, each non-matching key hitting 'continue', until the requested key is reached ...]
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
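For readability, this is what the traced get_meminfo helper is doing at setup/common.sh@16-33: it picks a meminfo source, strips any "Node <n>" prefix, then splits each line on ': ' and echoes the value of the first field whose key matches the requested one (the backslash-escaped right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are simply how set -x renders a quoted, literal match inside [[ ]]). The sketch below is a minimal reconstruction from the trace, not the actual setup/common.sh source; the regex-based prefix strip stands in for the extglob expansion seen at @29, and the example calls at the bottom are illustrative only.

#!/usr/bin/env bash
# Minimal sketch of a get_meminfo-style lookup, reconstructed from the xtrace above.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Prefer a per-NUMA-node meminfo file when it exists. In the log node= is
    # empty, so the probed path degenerates to .../node/node/meminfo, the test
    # fails, and the global /proc/meminfo is used.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # Per-node files prefix every field with "Node <n> "; drop it so the
        # key parses the same way as in the global /proc/meminfo.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on the VM in this run
get_meminfo MemFree 0        # hypothetical per-node lookup via node0/meminfo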
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.205 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7942220 kB' 'MemAvailable: 9453372 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493488 kB' 'Inactive: 1352744 kB' 'Active(anon): 131288 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122692 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139960 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73216 kB' 'KernelStack: 6324 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: the per-key scan at setup/common.sh@31-32 repeats, 'continue'-ing past every field until HugePages_Rsvd matches ...]
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:50.207 nr_hugepages=1024 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:50.207 resv_hugepages=0 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:50.207 surplus_hugepages=0 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:50.207 anon_hugepages=0 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
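The hugepages.sh lines above collect those per-key reads into the bookkeeping the no_shrink_alloc test prints (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then asserts with the two arithmetic checks at @107 and @109: after the allocation exercise, the configured pool must still be fully accounted for, with no surplus or reserved pages making up the difference. The sketch below reuses the get_meminfo sketch shown earlier and is a hedged reconstruction of that check's intent; the variable names and the source of the expected count are assumptions, not the script's exact code.

# Reconstruction of the "pool did not shrink" assertion (not the real setup/hugepages.sh).
requested=1024                                # assumption: pool size the test configured earlier
nr_hugepages=$(get_meminfo HugePages_Total)   # what the kernel reports now: 1024
surp=$(get_meminfo HugePages_Surp)            # 0 surplus pages
resv=$(get_meminfo HugePages_Rsvd)            # 0 reserved pages

# Every requested page is still accounted for, and none of them had to come
# from the surplus or reserved pools.
if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
    echo "hugepage pool intact"
else
    echo "hugepage pool shrank" >&2
    exit 1
fi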
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.207 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7942324 kB' 'MemAvailable: 9453476 kB' 'Buffers: 2436 kB' 'Cached: 1722972 kB' 'SwapCached: 0 kB' 'Active: 493684 kB' 'Inactive: 1352744 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122588 kB' 'Mapped: 48480 kB' 'Shmem: 10464 kB' 'KReclaimable: 66744 kB' 'Slab: 139960 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73216 kB' 'KernelStack: 6320 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 350360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: the per-key scan at setup/common.sh@31-32 starts again, 'continue'-ing past MemTotal through VmallocChunk while looking for HugePages_Total ...]
00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7942324 kB' 'MemUsed: 4299640 kB' 'SwapCached: 0 kB' 'Active: 
493348 kB' 'Inactive: 1352744 kB' 'Active(anon): 131148 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352744 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1725408 kB' 'Mapped: 48480 kB' 'AnonPages: 122324 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66744 kB' 'Slab: 139960 kB' 'SReclaimable: 66744 kB' 'SUnreclaim: 73216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.209 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.210 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.210 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.210 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.210 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.210 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.210 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.210 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.210 
19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[repetitive xtrace trimmed: the same setup/common.sh@31-32 scan over node0's meminfo, reading and skipping every field from Inactive(anon) through FileHugePages because none matched HugePages_Surp]
00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.211 19:33:15
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.211 node0=1024 expecting 1024 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.211 00:04:50.211 real 0m0.969s 00:04:50.211 user 0m0.498s 00:04:50.211 sys 0m0.532s 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.211 19:33:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:50.211 ************************************ 00:04:50.211 END TEST no_shrink_alloc 00:04:50.211 ************************************ 00:04:50.211 19:33:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:50.211 19:33:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:50.211 19:33:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:50.211 19:33:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.211 
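The verification just traced reduces to one idiom: get_meminfo() in test/setup/common.sh walks a meminfo file line by line until the requested field matches, then echoes its value (1024 for HugePages_Total, 0 for node0's HugePages_Surp above). A minimal stand-alone sketch of that idiom, assuming the same sysfs layout (the name get_meminfo_sketch and the here-string loop are illustrative, not the SPDK helper itself):

    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below
    get_meminfo_sketch() {    # usage: get_meminfo_sketch <field> [node]
      local get=$1 node=$2
      local mem_f=/proc/meminfo line var val _
      # with a node argument, read the per-node file instead of the global one
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node lines are prefixed with "Node N "
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"    # e.g. var=HugePages_Total val=1024
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
    }
    # get_meminfo_sketch HugePages_Total   -> 1024 on this VM
    # get_meminfo_sketch HugePages_Surp 0  -> 0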
19:33:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.211 19:33:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.211 19:33:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.211 19:33:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:50.211 19:33:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:50.211 19:33:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:50.211 00:04:50.211 real 0m4.239s 00:04:50.211 user 0m2.127s 00:04:50.211 sys 0m2.216s 00:04:50.211 19:33:15 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.211 ************************************ 00:04:50.211 END TEST hugepages 00:04:50.211 19:33:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:50.211 ************************************ 00:04:50.470 19:33:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:50.470 19:33:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:50.470 19:33:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.470 19:33:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.470 19:33:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.470 ************************************ 00:04:50.470 START TEST driver 00:04:50.470 ************************************ 00:04:50.470 19:33:15 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:50.470 * Looking for test storage... 00:04:50.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:50.470 19:33:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:50.470 19:33:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.470 19:33:16 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.035 19:33:16 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:51.035 19:33:16 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.035 19:33:16 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.035 19:33:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:51.035 ************************************ 00:04:51.035 START TEST guess_driver 00:04:51.035 ************************************ 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
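The clear_hp teardown at the start of this chunk is equally small once the xtrace is unwound: write 0 to nr_hugepages for every page size on every NUMA node, then export CLEAR_HUGE=yes as the trace does. A rough equivalent, assuming the usual sysfs hugepage layout (clear_hugepages_sketch is a hypothetical name, not the helper in test/setup/hugepages.sh):

    clear_hugepages_sketch() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
          [[ -e $hp/nr_hugepages ]] || continue
          echo 0 > "$hp/nr_hugepages"    # needs root; releases the reserved pages
        done
      done
      export CLEAR_HUGE=yes    # the trace exports this for the setup.sh calls that follow
    }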
00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:51.035 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:51.035 Looking for driver=uio_pci_generic 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.035 19:33:16 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:51.601 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.859 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:51.859 19:33:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:51.859 19:33:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.859 19:33:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.117 00:04:52.117 real 0m1.332s 00:04:52.117 user 0m0.477s 00:04:52.117 sys 0m0.871s 00:04:52.117 19:33:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:52.117 19:33:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:52.117 ************************************ 00:04:52.117 END TEST guess_driver 00:04:52.117 ************************************ 00:04:52.375 19:33:17 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:52.375 ************************************ 00:04:52.375 END TEST driver 00:04:52.375 ************************************ 00:04:52.375 00:04:52.375 real 0m1.969s 00:04:52.375 user 0m0.703s 00:04:52.375 sys 0m1.332s 00:04:52.375 19:33:17 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.375 19:33:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:52.375 19:33:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:52.375 19:33:17 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:52.375 19:33:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.375 19:33:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.375 19:33:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.375 ************************************ 00:04:52.375 START TEST devices 00:04:52.375 ************************************ 00:04:52.375 19:33:17 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:52.375 * Looking for test storage... 00:04:52.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:52.375 19:33:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:52.375 19:33:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:52.375 19:33:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.375 19:33:18 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
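The guess_driver subtest above is a two-branch decision: vfio is viable only when IOMMU groups exist under /sys/kernel/iommu_groups (or unsafe no-IOMMU mode is enabled), and since this VM has neither, the test falls back to uio_pci_generic after modprobe --show-depends resolves the module. A condensed sketch of that decision (pick_driver_sketch and the plain success check on modprobe are simplifications of the real helper, which also inspects the resolved .ko paths):

    pick_driver_sketch() {
      shopt -s nullglob
      local groups=(/sys/kernel/iommu_groups/*)
      shopt -u nullglob
      local unsafe=""
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
      elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic    # the branch taken in this run
      else
        echo 'No valid driver found'
        return 1
      fi
    }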
00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:53.310 19:33:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:53.310 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:53.310 19:33:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:53.310 19:33:18 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:53.310 No valid GPT data, bailing 00:04:53.310 19:33:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:53.311 
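The get_zoned_devs loop traced above only looks at each block device's queue/zoned attribute; anything other than "none" marks the device as zoned and excludes it from the tests. A minimal version of that scan (storing 1 as the map value is an assumption; the real helper may store something richer):

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
      [[ -e $nvme/queue/zoned ]] || continue
      if [[ $(< "$nvme/queue/zoned") != none ]]; then
        zoned_devs["${nvme##*/}"]=1
      fi
    done
    # every namespace in this run reported "none", so the map stays empty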
19:33:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:53.311 No valid GPT data, bailing 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:53.311 No valid GPT data, bailing 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:53.311 19:33:18 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:53.311 19:33:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:53.311 19:33:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:53.311 19:33:18 
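The block_in_use / sec_size_to_bytes filtering traced around here keeps a namespace as a test disk only if no partition-table signature is found on it (the "No valid GPT data, bailing" path) and it is at least min_disk_size=3221225472 bytes. A condensed, hypothetical version of that filter (the real code goes through scripts/spdk-gpt.py and also walks holders/; computing the size from /sys/block/*/size is an assumption here):

    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in the trace
    candidate_disks=()
    for block in /dev/nvme[0-9]n[0-9]; do
      [[ -b $block ]] || continue
      pt=$(blkid -s PTTYPE -o value "$block" 2> /dev/null)
      [[ -z $pt ]] || continue                  # an existing partition table means "in use"
      size=$(( $(< "/sys/block/${block##*/}/size") * 512 ))    # 512-byte sectors -> bytes
      (( size >= min_disk_size )) && candidate_disks+=("${block##*/}")
    done
    # in this run nvme0n1/n2/n3 (4 GiB) and nvme1n1 (5 GiB) all qualify, and nvme0n1 becomes test_disk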
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:53.311 No valid GPT data, bailing 00:04:53.311 19:33:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:53.311 19:33:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:53.311 19:33:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:53.311 19:33:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:53.311 19:33:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:53.311 19:33:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:53.311 19:33:19 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:53.311 19:33:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:53.311 19:33:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:53.311 19:33:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:53.311 19:33:19 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:53.311 19:33:19 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:53.311 19:33:19 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:53.311 19:33:19 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.311 19:33:19 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.311 19:33:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.311 ************************************ 00:04:53.311 START TEST nvme_mount 00:04:53.311 ************************************ 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:53.311 19:33:19 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:54.687 Creating new GPT entries in memory. 00:04:54.687 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:54.687 other utilities. 00:04:54.687 19:33:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:54.687 19:33:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.687 19:33:20 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:54.687 19:33:20 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:54.687 19:33:20 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:55.620 Creating new GPT entries in memory. 00:04:55.620 The operation has completed successfully. 00:04:55.620 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:55.620 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.620 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58804 00:04:55.620 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.620 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- 
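partition_drive, whose xtrace and sgdisk output appear just above, boils down to: wipe the GPT, create the first partition with the geometry the trace used, and wait for the kernel/udev to publish /dev/nvme0n1p1 before touching it. The trace waits through scripts/sync_dev_uevents.sh; udevadm settle is used below as a stand-in:

    disk=/dev/nvme0n1                                  # the test_disk picked in this run
    sgdisk "$disk" --zap-all                           # prints "GPT data structures destroyed!"
    flock "$disk" sgdisk "$disk" --new=1:2048:264191   # same first-partition geometry as the trace
    udevadm settle                                     # stand-in for sync_dev_uevents.sh
    [[ -b ${disk}p1 ]] && echo "${disk}p1 is ready for mkfs.ext4 -qF"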
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.621 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:55.878 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.878 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.136 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:56.136 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:56.136 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:56.136 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
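The cleanup_nvme step just traced (umount plus the two wipefs passes whose output is shown above) can be reproduced on its own; the paths below are the ones from this run:

    nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # clears the ext4 magic (53 ef)
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # clears primary/backup GPT and the PMBR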
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.136 19:33:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.394 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.394 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.394 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.394 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.394 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.394 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.394 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.394 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.651 19:33:22 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.651 19:33:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.909 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.909 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:56.909 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.909 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.909 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.909 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.909 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.909 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:57.166 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.166 00:04:57.166 real 0m3.741s 00:04:57.166 user 0m0.559s 00:04:57.166 sys 0m0.935s 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.166 19:33:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:57.166 ************************************ 00:04:57.166 END TEST nvme_mount 00:04:57.166 ************************************ 00:04:57.166 19:33:22 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:57.166 19:33:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:57.166 19:33:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.166 19:33:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.166 19:33:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.166 ************************************ 00:04:57.166 START TEST dm_mount 00:04:57.166 ************************************ 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.166 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.167 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.167 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.167 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:57.167 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:57.167 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:57.167 19:33:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:58.099 Creating new GPT entries in memory. 00:04:58.099 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:58.099 other utilities. 00:04:58.099 19:33:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:58.099 19:33:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.099 19:33:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.099 19:33:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.099 19:33:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:59.471 Creating new GPT entries in memory. 00:04:59.471 The operation has completed successfully. 00:04:59.471 19:33:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:59.471 19:33:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.471 19:33:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.471 19:33:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.471 19:33:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:00.402 The operation has completed successfully. 
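The trace above shows the dm_mount fixture wiping /dev/nvme0n1 and carving two small GPT partitions out of it before a device-mapper target is layered on top. As a rough illustration only (assuming the same scratch disk and the sector arithmetic recorded in the trace, where the 1073741824-byte request is divided by 4096 to give 262144 sectors per partition), the equivalent manual sequence would be approximately:

    disk=/dev/nvme0n1
    size=1073741824                      # bytes requested per partition
    sectors=$(( size / 4096 ))           # 262144, as in the trace
    sgdisk "$disk" --zap-all             # drop any existing GPT/MBR metadata
    flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + sectors - 1 ))                     # 1:2048:264191
    flock "$disk" sgdisk "$disk" --new=2:$(( 2048 + sectors )):$(( 2048 + 2*sectors - 1 ))  # 2:264192:526335

The harness also starts scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 beforehand and waits on it, so it only proceeds once the kernel has announced both new partitions.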
00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59232 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.402 19:33:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.402 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.402 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:00.402 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:00.402 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.402 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.402 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.659 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.916 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:01.230 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:01.230 00:05:01.230 real 0m4.169s 00:05:01.230 user 0m0.451s 00:05:01.230 sys 0m0.676s 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.230 19:33:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.230 ************************************ 00:05:01.230 END TEST dm_mount 00:05:01.230 ************************************ 00:05:01.488 19:33:27 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:01.488 19:33:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:01.488 19:33:27 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.488 19:33:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.488 19:33:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.488 19:33:27 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.488 19:33:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.488 19:33:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.744 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.744 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.744 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.744 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.744 19:33:27 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.744 19:33:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.744 19:33:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.744 19:33:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.744 19:33:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.744 19:33:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.744 19:33:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.744 00:05:01.744 real 0m9.353s 00:05:01.744 user 0m1.607s 00:05:01.744 sys 0m2.187s 00:05:01.744 19:33:27 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.744 19:33:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.744 ************************************ 00:05:01.744 END TEST devices 00:05:01.744 ************************************ 00:05:01.744 19:33:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:01.744 ************************************ 00:05:01.744 END TEST setup.sh 00:05:01.744 ************************************ 00:05:01.744 00:05:01.744 real 0m20.034s 00:05:01.744 user 0m6.365s 00:05:01.744 sys 0m8.215s 00:05:01.744 19:33:27 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.744 19:33:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.744 19:33:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.744 19:33:27 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.307 Hugepages 00:05:02.307 node hugesize free / total 00:05:02.307 node0 1048576kB 0 / 0 00:05:02.307 node0 2048kB 2048 / 2048 00:05:02.307 00:05:02.307 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.307 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.564 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:02.564 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:02.564 19:33:28 -- spdk/autotest.sh@130 -- # uname -s 00:05:02.564 19:33:28 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:02.564 19:33:28 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:02.564 19:33:28 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.129 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.387 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.387 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.387 19:33:29 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:04.335 19:33:30 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:04.335 19:33:30 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:04.335 19:33:30 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.335 19:33:30 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:04.335 19:33:30 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:04.335 19:33:30 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:04.335 19:33:30 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.335 19:33:30 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.335 19:33:30 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:04.592 19:33:30 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:04.592 19:33:30 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:04.592 19:33:30 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.850 Waiting for block devices as requested 00:05:04.850 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.850 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:05.108 19:33:30 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:05.108 19:33:30 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:05.108 19:33:30 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:05.108 19:33:30 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:05.108 19:33:30 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:05.108 19:33:30 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:05.108 19:33:30 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:05.108 19:33:30 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:05.108 19:33:30 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:05.108 19:33:30 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:05.108 19:33:30 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:05.108 19:33:30 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:05.108 19:33:30 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:05.108 19:33:30 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:05.108 19:33:30 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:05.108 19:33:30 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:05.108 19:33:30 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:05.108 19:33:30 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:05.108 19:33:30 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:05.108 19:33:30 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:05.108 19:33:30 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:05.108 19:33:30 -- common/autotest_common.sh@1557 -- # continue 00:05:05.108 
19:33:30 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:05.108 19:33:30 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:05.108 19:33:30 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:05.108 19:33:30 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:05.108 19:33:30 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:05.108 19:33:30 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:05.108 19:33:30 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:05.108 19:33:30 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:05.108 19:33:30 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:05.108 19:33:30 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:05.108 19:33:30 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:05.108 19:33:30 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:05.108 19:33:30 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:05.108 19:33:30 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:05.108 19:33:30 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:05.108 19:33:30 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:05.108 19:33:30 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:05.108 19:33:30 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:05.108 19:33:30 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:05.108 19:33:30 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:05.108 19:33:30 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:05.108 19:33:30 -- common/autotest_common.sh@1557 -- # continue 00:05:05.108 19:33:30 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:05.108 19:33:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.108 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:05:05.108 19:33:30 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:05.108 19:33:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.108 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:05:05.108 19:33:30 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.930 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.930 19:33:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:05.930 19:33:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.930 19:33:31 -- common/autotest_common.sh@10 -- # set +x 00:05:05.930 19:33:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:05.930 19:33:31 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:05.930 19:33:31 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.930 19:33:31 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:05.930 19:33:31 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:05.930 19:33:31 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:05.930 19:33:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:05.930 19:33:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:05.930 19:33:31 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.930 19:33:31 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.930 19:33:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:05.930 19:33:31 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:05.930 19:33:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:05.930 19:33:31 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:05.930 19:33:31 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:05.930 19:33:31 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:05.930 19:33:31 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.930 19:33:31 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:05.930 19:33:31 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:05.930 19:33:31 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:05.930 19:33:31 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.930 19:33:31 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:05.930 19:33:31 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:05.930 19:33:31 -- common/autotest_common.sh@1593 -- # return 0 00:05:05.930 19:33:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:05.930 19:33:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:05.930 19:33:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.930 19:33:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.930 19:33:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:05.930 19:33:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.930 19:33:31 -- common/autotest_common.sh@10 -- # set +x 00:05:05.930 19:33:31 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:05.930 19:33:31 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.930 19:33:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.930 19:33:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.930 19:33:31 -- common/autotest_common.sh@10 -- # set +x 00:05:06.188 ************************************ 00:05:06.188 START TEST env 00:05:06.188 ************************************ 00:05:06.188 19:33:31 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:06.188 * Looking for test storage... 
00:05:06.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:06.188 19:33:31 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:06.188 19:33:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.188 19:33:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.188 19:33:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.188 ************************************ 00:05:06.188 START TEST env_memory 00:05:06.188 ************************************ 00:05:06.188 19:33:31 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:06.188 00:05:06.188 00:05:06.188 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.188 http://cunit.sourceforge.net/ 00:05:06.188 00:05:06.188 00:05:06.188 Suite: memory 00:05:06.188 Test: alloc and free memory map ...[2024-07-15 19:33:31.861437] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.188 passed 00:05:06.188 Test: mem map translation ...[2024-07-15 19:33:31.892816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.188 [2024-07-15 19:33:31.892867] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.188 [2024-07-15 19:33:31.892922] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.188 [2024-07-15 19:33:31.892934] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.188 passed 00:05:06.188 Test: mem map registration ...[2024-07-15 19:33:31.957078] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:06.188 [2024-07-15 19:33:31.957140] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:06.446 passed 00:05:06.446 Test: mem map adjacent registrations ...passed 00:05:06.446 00:05:06.446 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.446 suites 1 1 n/a 0 0 00:05:06.446 tests 4 4 4 0 0 00:05:06.446 asserts 152 152 152 0 n/a 00:05:06.446 00:05:06.446 Elapsed time = 0.206 seconds 00:05:06.446 00:05:06.446 real 0m0.224s 00:05:06.446 user 0m0.206s 00:05:06.446 sys 0m0.013s 00:05:06.446 19:33:32 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.446 19:33:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:06.446 ************************************ 00:05:06.446 END TEST env_memory 00:05:06.446 ************************************ 00:05:06.446 19:33:32 env -- common/autotest_common.sh@1142 -- # return 0 00:05:06.446 19:33:32 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.446 19:33:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.446 19:33:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.446 19:33:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.446 ************************************ 00:05:06.446 START TEST env_vtophys 
00:05:06.446 ************************************ 00:05:06.446 19:33:32 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.446 EAL: lib.eal log level changed from notice to debug 00:05:06.446 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.446 EAL: Detected lcore 1 as core 0 on socket 0 00:05:06.446 EAL: Detected lcore 2 as core 0 on socket 0 00:05:06.446 EAL: Detected lcore 3 as core 0 on socket 0 00:05:06.447 EAL: Detected lcore 4 as core 0 on socket 0 00:05:06.447 EAL: Detected lcore 5 as core 0 on socket 0 00:05:06.447 EAL: Detected lcore 6 as core 0 on socket 0 00:05:06.447 EAL: Detected lcore 7 as core 0 on socket 0 00:05:06.447 EAL: Detected lcore 8 as core 0 on socket 0 00:05:06.447 EAL: Detected lcore 9 as core 0 on socket 0 00:05:06.447 EAL: Maximum logical cores by configuration: 128 00:05:06.447 EAL: Detected CPU lcores: 10 00:05:06.447 EAL: Detected NUMA nodes: 1 00:05:06.447 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:06.447 EAL: Detected shared linkage of DPDK 00:05:06.447 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.447 EAL: Selected IOVA mode 'PA' 00:05:06.447 EAL: Probing VFIO support... 00:05:06.447 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.447 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:06.447 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.447 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.447 EAL: Setting up physically contiguous memory... 00:05:06.447 EAL: Setting maximum number of open files to 524288 00:05:06.447 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.447 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.447 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.447 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.447 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.447 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.447 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.447 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.447 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.447 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.447 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.447 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.447 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.447 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.447 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.447 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.447 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.447 EAL: Hugepages will be freed exactly as allocated. 
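For reference, the four "VA reserved for memseg list" figures above follow directly from the advertised list geometry (n_segs:8192, hugepage_sz:2097152): each list reserves room for 8192 segments of 2 MiB. A quick arithmetic check, shown here purely as an illustration:

    # 8192 segments x 2 MiB hugepages per memseg list
    printf '%d bytes = 0x%x\n' $(( 8192 * 2097152 )) $(( 8192 * 2097152 ))
    # prints: 17179869184 bytes = 0x400000000, matching the "size = 0x400000000" reservations above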
00:05:06.447 EAL: No shared files mode enabled, IPC is disabled 00:05:06.447 EAL: No shared files mode enabled, IPC is disabled 00:05:06.447 EAL: TSC frequency is ~2200000 KHz 00:05:06.447 EAL: Main lcore 0 is ready (tid=7f5712effa00;cpuset=[0]) 00:05:06.447 EAL: Trying to obtain current memory policy. 00:05:06.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.447 EAL: Restoring previous memory policy: 0 00:05:06.447 EAL: request: mp_malloc_sync 00:05:06.447 EAL: No shared files mode enabled, IPC is disabled 00:05:06.447 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.447 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.705 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.705 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.705 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:06.705 00:05:06.705 00:05:06.705 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.705 http://cunit.sourceforge.net/ 00:05:06.705 00:05:06.705 00:05:06.705 Suite: components_suite 00:05:06.705 Test: vtophys_malloc_test ...passed 00:05:06.705 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.705 EAL: Restoring previous memory policy: 4 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.705 EAL: Trying to obtain current memory policy. 00:05:06.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.705 EAL: Restoring previous memory policy: 4 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.705 EAL: Trying to obtain current memory policy. 00:05:06.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.705 EAL: Restoring previous memory policy: 4 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.705 EAL: Trying to obtain current memory policy. 
00:05:06.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.705 EAL: Restoring previous memory policy: 4 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.705 EAL: Trying to obtain current memory policy. 00:05:06.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.705 EAL: Restoring previous memory policy: 4 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.705 EAL: Trying to obtain current memory policy. 00:05:06.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.705 EAL: Restoring previous memory policy: 4 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.705 EAL: request: mp_malloc_sync 00:05:06.705 EAL: No shared files mode enabled, IPC is disabled 00:05:06.705 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.706 EAL: Trying to obtain current memory policy. 00:05:06.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.706 EAL: Restoring previous memory policy: 4 00:05:06.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.706 EAL: request: mp_malloc_sync 00:05:06.706 EAL: No shared files mode enabled, IPC is disabled 00:05:06.706 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.706 EAL: request: mp_malloc_sync 00:05:06.706 EAL: No shared files mode enabled, IPC is disabled 00:05:06.706 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.706 EAL: Trying to obtain current memory policy. 00:05:06.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.706 EAL: Restoring previous memory policy: 4 00:05:06.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.706 EAL: request: mp_malloc_sync 00:05:06.706 EAL: No shared files mode enabled, IPC is disabled 00:05:06.706 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.974 EAL: request: mp_malloc_sync 00:05:06.974 EAL: No shared files mode enabled, IPC is disabled 00:05:06.974 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.974 EAL: Trying to obtain current memory policy. 
00:05:06.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.974 EAL: Restoring previous memory policy: 4 00:05:06.974 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.974 EAL: request: mp_malloc_sync 00:05:06.974 EAL: No shared files mode enabled, IPC is disabled 00:05:06.974 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.233 EAL: request: mp_malloc_sync 00:05:07.233 EAL: No shared files mode enabled, IPC is disabled 00:05:07.233 EAL: Heap on socket 0 was shrunk by 514MB 00:05:07.233 EAL: Trying to obtain current memory policy. 00:05:07.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.491 EAL: Restoring previous memory policy: 4 00:05:07.491 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.491 EAL: request: mp_malloc_sync 00:05:07.491 EAL: No shared files mode enabled, IPC is disabled 00:05:07.491 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.006 passed 00:05:08.006 00:05:08.006 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.006 suites 1 1 n/a 0 0 00:05:08.006 tests 2 2 2 0 0 00:05:08.006 asserts 5337 5337 5337 0 n/a 00:05:08.006 00:05:08.006 Elapsed time = 1.281 seconds 00:05:08.006 EAL: request: mp_malloc_sync 00:05:08.006 EAL: No shared files mode enabled, IPC is disabled 00:05:08.006 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.006 EAL: request: mp_malloc_sync 00:05:08.006 EAL: No shared files mode enabled, IPC is disabled 00:05:08.006 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.006 EAL: No shared files mode enabled, IPC is disabled 00:05:08.006 EAL: No shared files mode enabled, IPC is disabled 00:05:08.006 EAL: No shared files mode enabled, IPC is disabled 00:05:08.006 00:05:08.006 real 0m1.486s 00:05:08.006 user 0m0.811s 00:05:08.006 sys 0m0.536s 00:05:08.006 19:33:33 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.006 ************************************ 00:05:08.006 END TEST env_vtophys 00:05:08.006 ************************************ 00:05:08.006 19:33:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:08.006 19:33:33 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.006 19:33:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.006 19:33:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.006 19:33:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.006 19:33:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.006 ************************************ 00:05:08.006 START TEST env_pci 00:05:08.006 ************************************ 00:05:08.006 19:33:33 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.006 00:05:08.006 00:05:08.006 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.006 http://cunit.sourceforge.net/ 00:05:08.006 00:05:08.006 00:05:08.006 Suite: pci 00:05:08.006 Test: pci_hook ...[2024-07-15 19:33:33.634478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60425 has claimed it 00:05:08.006 passed 00:05:08.006 00:05:08.006 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.006 suites 1 1 n/a 0 0 00:05:08.006 tests 1 1 1 0 0 00:05:08.006 asserts 25 25 25 0 n/a 00:05:08.006 
00:05:08.006 Elapsed time = 0.003 seconds 00:05:08.006 EAL: Cannot find device (10000:00:01.0) 00:05:08.006 EAL: Failed to attach device on primary process 00:05:08.006 00:05:08.006 real 0m0.022s 00:05:08.006 user 0m0.011s 00:05:08.006 sys 0m0.011s 00:05:08.006 19:33:33 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.006 19:33:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:08.006 ************************************ 00:05:08.006 END TEST env_pci 00:05:08.006 ************************************ 00:05:08.006 19:33:33 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.006 19:33:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.006 19:33:33 env -- env/env.sh@15 -- # uname 00:05:08.006 19:33:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.006 19:33:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.006 19:33:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.006 19:33:33 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:08.006 19:33:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.006 19:33:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.006 ************************************ 00:05:08.006 START TEST env_dpdk_post_init 00:05:08.006 ************************************ 00:05:08.006 19:33:33 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.006 EAL: Detected CPU lcores: 10 00:05:08.007 EAL: Detected NUMA nodes: 1 00:05:08.007 EAL: Detected shared linkage of DPDK 00:05:08.007 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.007 EAL: Selected IOVA mode 'PA' 00:05:08.264 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.264 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:08.264 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:08.264 Starting DPDK initialization... 00:05:08.264 Starting SPDK post initialization... 00:05:08.264 SPDK NVMe probe 00:05:08.264 Attaching to 0000:00:10.0 00:05:08.264 Attaching to 0000:00:11.0 00:05:08.264 Attached to 0000:00:10.0 00:05:08.264 Attached to 0000:00:11.0 00:05:08.264 Cleaning up... 
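Should this pass ever need to be reproduced outside the CI harness, the trace above already records the exact binary and arguments. A sketch of a manual rerun, assuming the NVMe controllers are first rebound to a userspace driver and hugepages are reserved via scripts/setup.sh as elsewhere in this run:

    cd /home/vagrant/spdk_repo/spdk
    sudo ./scripts/setup.sh config            # bind NVMe devices and reserve hugepages
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000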
00:05:08.264 00:05:08.264 real 0m0.179s 00:05:08.264 user 0m0.047s 00:05:08.264 sys 0m0.032s 00:05:08.264 19:33:33 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.264 ************************************ 00:05:08.264 END TEST env_dpdk_post_init 00:05:08.264 ************************************ 00:05:08.264 19:33:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.264 19:33:33 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.264 19:33:33 env -- env/env.sh@26 -- # uname 00:05:08.264 19:33:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:08.264 19:33:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.264 19:33:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.264 19:33:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.264 19:33:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.264 ************************************ 00:05:08.264 START TEST env_mem_callbacks 00:05:08.264 ************************************ 00:05:08.264 19:33:33 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.264 EAL: Detected CPU lcores: 10 00:05:08.264 EAL: Detected NUMA nodes: 1 00:05:08.264 EAL: Detected shared linkage of DPDK 00:05:08.264 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.264 EAL: Selected IOVA mode 'PA' 00:05:08.522 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.522 00:05:08.522 00:05:08.522 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.522 http://cunit.sourceforge.net/ 00:05:08.522 00:05:08.522 00:05:08.522 Suite: memory 00:05:08.522 Test: test ... 
00:05:08.522 register 0x200000200000 2097152 00:05:08.522 malloc 3145728 00:05:08.522 register 0x200000400000 4194304 00:05:08.522 buf 0x200000500000 len 3145728 PASSED 00:05:08.522 malloc 64 00:05:08.522 buf 0x2000004fff40 len 64 PASSED 00:05:08.522 malloc 4194304 00:05:08.522 register 0x200000800000 6291456 00:05:08.522 buf 0x200000a00000 len 4194304 PASSED 00:05:08.522 free 0x200000500000 3145728 00:05:08.522 free 0x2000004fff40 64 00:05:08.522 unregister 0x200000400000 4194304 PASSED 00:05:08.522 free 0x200000a00000 4194304 00:05:08.522 unregister 0x200000800000 6291456 PASSED 00:05:08.522 malloc 8388608 00:05:08.522 register 0x200000400000 10485760 00:05:08.522 buf 0x200000600000 len 8388608 PASSED 00:05:08.522 free 0x200000600000 8388608 00:05:08.522 unregister 0x200000400000 10485760 PASSED 00:05:08.522 passed 00:05:08.522 00:05:08.522 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.522 suites 1 1 n/a 0 0 00:05:08.522 tests 1 1 1 0 0 00:05:08.522 asserts 15 15 15 0 n/a 00:05:08.522 00:05:08.522 Elapsed time = 0.009 seconds 00:05:08.522 00:05:08.522 real 0m0.146s 00:05:08.522 user 0m0.018s 00:05:08.522 sys 0m0.026s 00:05:08.522 19:33:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.522 ************************************ 00:05:08.522 END TEST env_mem_callbacks 00:05:08.522 ************************************ 00:05:08.522 19:33:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:08.522 19:33:34 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.522 00:05:08.522 real 0m2.395s 00:05:08.522 user 0m1.212s 00:05:08.522 sys 0m0.818s 00:05:08.522 19:33:34 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.522 19:33:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.522 ************************************ 00:05:08.522 END TEST env 00:05:08.522 ************************************ 00:05:08.522 19:33:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.522 19:33:34 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.522 19:33:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.522 19:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.522 19:33:34 -- common/autotest_common.sh@10 -- # set +x 00:05:08.522 ************************************ 00:05:08.522 START TEST rpc 00:05:08.522 ************************************ 00:05:08.522 19:33:34 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.522 * Looking for test storage... 00:05:08.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.522 19:33:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60529 00:05:08.522 19:33:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:08.523 19:33:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.523 19:33:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60529 00:05:08.523 19:33:34 rpc -- common/autotest_common.sh@829 -- # '[' -z 60529 ']' 00:05:08.523 19:33:34 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.523 19:33:34 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.523 19:33:34 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
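The rpc_integrity, rpc_plugins and rpc_daemon_integrity cases that follow all drive this freshly started spdk_tgt (pid 60529) through rpc_cmd, which essentially forwards its arguments to scripts/rpc.py over the /var/tmp/spdk.sock socket being waited on here (go_rpc does the same through the Go client built as build/examples/hello_gorpc). Ignoring the xtrace noise, the heart of rpc_integrity is roughly the sequence below; the variable names are illustrative, not taken from the harness:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs | jq length                       # 0 bdevs on a clean target
    malloc=$($rpc bdev_malloc_create 8 512)               # 8 MB malloc bdev, 512-byte blocks -> "Malloc0"
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0   # passthru bdev claims Malloc0
    $rpc bdev_get_bdevs | jq length                       # now 2: Malloc0 plus Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"                     # back to an empty bdev list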
00:05:08.523 19:33:34 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.523 19:33:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.780 [2024-07-15 19:33:34.328211] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:08.780 [2024-07-15 19:33:34.328801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:05:08.780 [2024-07-15 19:33:34.463432] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.038 [2024-07-15 19:33:34.597255] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:09.038 [2024-07-15 19:33:34.597308] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60529' to capture a snapshot of events at runtime. 00:05:09.038 [2024-07-15 19:33:34.597319] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.038 [2024-07-15 19:33:34.597328] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.039 [2024-07-15 19:33:34.597335] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60529 for offline analysis/debug. 00:05:09.039 [2024-07-15 19:33:34.597360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.604 19:33:35 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.604 19:33:35 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:09.604 19:33:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.604 19:33:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.604 19:33:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:09.604 19:33:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:09.604 19:33:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.604 19:33:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.604 19:33:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.604 ************************************ 00:05:09.604 START TEST rpc_integrity 00:05:09.604 ************************************ 00:05:09.604 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:09.604 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.604 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.604 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.604 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.604 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.604 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.862 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.862 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.862 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.862 19:33:35 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.862 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.862 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.862 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.862 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.862 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.862 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.862 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.862 { 00:05:09.862 "aliases": [ 00:05:09.862 "248bd027-e12a-424c-9e6d-c00088a7d200" 00:05:09.862 ], 00:05:09.862 "assigned_rate_limits": { 00:05:09.862 "r_mbytes_per_sec": 0, 00:05:09.862 "rw_ios_per_sec": 0, 00:05:09.862 "rw_mbytes_per_sec": 0, 00:05:09.862 "w_mbytes_per_sec": 0 00:05:09.862 }, 00:05:09.862 "block_size": 512, 00:05:09.862 "claimed": false, 00:05:09.862 "driver_specific": {}, 00:05:09.862 "memory_domains": [ 00:05:09.862 { 00:05:09.862 "dma_device_id": "system", 00:05:09.862 "dma_device_type": 1 00:05:09.862 }, 00:05:09.862 { 00:05:09.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.862 "dma_device_type": 2 00:05:09.862 } 00:05:09.862 ], 00:05:09.862 "name": "Malloc0", 00:05:09.862 "num_blocks": 16384, 00:05:09.862 "product_name": "Malloc disk", 00:05:09.862 "supported_io_types": { 00:05:09.862 "abort": true, 00:05:09.863 "compare": false, 00:05:09.863 "compare_and_write": false, 00:05:09.863 "copy": true, 00:05:09.863 "flush": true, 00:05:09.863 "get_zone_info": false, 00:05:09.863 "nvme_admin": false, 00:05:09.863 "nvme_io": false, 00:05:09.863 "nvme_io_md": false, 00:05:09.863 "nvme_iov_md": false, 00:05:09.863 "read": true, 00:05:09.863 "reset": true, 00:05:09.863 "seek_data": false, 00:05:09.863 "seek_hole": false, 00:05:09.863 "unmap": true, 00:05:09.863 "write": true, 00:05:09.863 "write_zeroes": true, 00:05:09.863 "zcopy": true, 00:05:09.863 "zone_append": false, 00:05:09.863 "zone_management": false 00:05:09.863 }, 00:05:09.863 "uuid": "248bd027-e12a-424c-9e6d-c00088a7d200", 00:05:09.863 "zoned": false 00:05:09.863 } 00:05:09.863 ]' 00:05:09.863 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.863 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.863 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.863 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.863 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.863 [2024-07-15 19:33:35.488737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.863 [2024-07-15 19:33:35.488787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.863 [2024-07-15 19:33:35.488806] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeb3c70 00:05:09.863 [2024-07-15 19:33:35.488816] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.863 [2024-07-15 19:33:35.490401] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.863 [2024-07-15 19:33:35.490439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.863 Passthru0 00:05:09.863 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.863 19:33:35 
rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.863 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.863 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.863 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.863 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.863 { 00:05:09.863 "aliases": [ 00:05:09.863 "248bd027-e12a-424c-9e6d-c00088a7d200" 00:05:09.863 ], 00:05:09.863 "assigned_rate_limits": { 00:05:09.863 "r_mbytes_per_sec": 0, 00:05:09.863 "rw_ios_per_sec": 0, 00:05:09.863 "rw_mbytes_per_sec": 0, 00:05:09.863 "w_mbytes_per_sec": 0 00:05:09.863 }, 00:05:09.863 "block_size": 512, 00:05:09.863 "claim_type": "exclusive_write", 00:05:09.863 "claimed": true, 00:05:09.863 "driver_specific": {}, 00:05:09.863 "memory_domains": [ 00:05:09.863 { 00:05:09.863 "dma_device_id": "system", 00:05:09.863 "dma_device_type": 1 00:05:09.863 }, 00:05:09.863 { 00:05:09.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.863 "dma_device_type": 2 00:05:09.863 } 00:05:09.863 ], 00:05:09.863 "name": "Malloc0", 00:05:09.863 "num_blocks": 16384, 00:05:09.863 "product_name": "Malloc disk", 00:05:09.863 "supported_io_types": { 00:05:09.863 "abort": true, 00:05:09.863 "compare": false, 00:05:09.863 "compare_and_write": false, 00:05:09.863 "copy": true, 00:05:09.863 "flush": true, 00:05:09.863 "get_zone_info": false, 00:05:09.863 "nvme_admin": false, 00:05:09.863 "nvme_io": false, 00:05:09.863 "nvme_io_md": false, 00:05:09.863 "nvme_iov_md": false, 00:05:09.863 "read": true, 00:05:09.863 "reset": true, 00:05:09.863 "seek_data": false, 00:05:09.863 "seek_hole": false, 00:05:09.863 "unmap": true, 00:05:09.863 "write": true, 00:05:09.863 "write_zeroes": true, 00:05:09.863 "zcopy": true, 00:05:09.863 "zone_append": false, 00:05:09.863 "zone_management": false 00:05:09.863 }, 00:05:09.863 "uuid": "248bd027-e12a-424c-9e6d-c00088a7d200", 00:05:09.863 "zoned": false 00:05:09.863 }, 00:05:09.863 { 00:05:09.863 "aliases": [ 00:05:09.863 "537db64b-fe1c-5eaf-8667-2238c3553790" 00:05:09.863 ], 00:05:09.863 "assigned_rate_limits": { 00:05:09.863 "r_mbytes_per_sec": 0, 00:05:09.863 "rw_ios_per_sec": 0, 00:05:09.863 "rw_mbytes_per_sec": 0, 00:05:09.863 "w_mbytes_per_sec": 0 00:05:09.863 }, 00:05:09.863 "block_size": 512, 00:05:09.863 "claimed": false, 00:05:09.863 "driver_specific": { 00:05:09.863 "passthru": { 00:05:09.863 "base_bdev_name": "Malloc0", 00:05:09.863 "name": "Passthru0" 00:05:09.863 } 00:05:09.863 }, 00:05:09.863 "memory_domains": [ 00:05:09.863 { 00:05:09.863 "dma_device_id": "system", 00:05:09.863 "dma_device_type": 1 00:05:09.863 }, 00:05:09.863 { 00:05:09.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.863 "dma_device_type": 2 00:05:09.863 } 00:05:09.863 ], 00:05:09.863 "name": "Passthru0", 00:05:09.863 "num_blocks": 16384, 00:05:09.863 "product_name": "passthru", 00:05:09.863 "supported_io_types": { 00:05:09.863 "abort": true, 00:05:09.863 "compare": false, 00:05:09.863 "compare_and_write": false, 00:05:09.863 "copy": true, 00:05:09.863 "flush": true, 00:05:09.863 "get_zone_info": false, 00:05:09.863 "nvme_admin": false, 00:05:09.863 "nvme_io": false, 00:05:09.863 "nvme_io_md": false, 00:05:09.863 "nvme_iov_md": false, 00:05:09.863 "read": true, 00:05:09.863 "reset": true, 00:05:09.863 "seek_data": false, 00:05:09.863 "seek_hole": false, 00:05:09.863 "unmap": true, 00:05:09.863 "write": true, 00:05:09.863 "write_zeroes": true, 00:05:09.863 "zcopy": true, 
00:05:09.863 "zone_append": false, 00:05:09.863 "zone_management": false 00:05:09.863 }, 00:05:09.864 "uuid": "537db64b-fe1c-5eaf-8667-2238c3553790", 00:05:09.864 "zoned": false 00:05:09.864 } 00:05:09.864 ]' 00:05:09.864 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.864 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.864 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.864 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.864 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.864 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.864 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.864 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.122 19:33:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.122 00:05:10.122 real 0m0.334s 00:05:10.122 user 0m0.222s 00:05:10.122 sys 0m0.036s 00:05:10.122 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.122 19:33:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.122 ************************************ 00:05:10.122 END TEST rpc_integrity 00:05:10.122 ************************************ 00:05:10.122 19:33:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.122 19:33:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:10.122 19:33:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.122 19:33:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.122 19:33:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.122 ************************************ 00:05:10.122 START TEST rpc_plugins 00:05:10.122 ************************************ 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:10.122 { 
00:05:10.122 "aliases": [ 00:05:10.122 "e0c6cbe0-c876-49f5-b15c-f176f071cac3" 00:05:10.122 ], 00:05:10.122 "assigned_rate_limits": { 00:05:10.122 "r_mbytes_per_sec": 0, 00:05:10.122 "rw_ios_per_sec": 0, 00:05:10.122 "rw_mbytes_per_sec": 0, 00:05:10.122 "w_mbytes_per_sec": 0 00:05:10.122 }, 00:05:10.122 "block_size": 4096, 00:05:10.122 "claimed": false, 00:05:10.122 "driver_specific": {}, 00:05:10.122 "memory_domains": [ 00:05:10.122 { 00:05:10.122 "dma_device_id": "system", 00:05:10.122 "dma_device_type": 1 00:05:10.122 }, 00:05:10.122 { 00:05:10.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.122 "dma_device_type": 2 00:05:10.122 } 00:05:10.122 ], 00:05:10.122 "name": "Malloc1", 00:05:10.122 "num_blocks": 256, 00:05:10.122 "product_name": "Malloc disk", 00:05:10.122 "supported_io_types": { 00:05:10.122 "abort": true, 00:05:10.122 "compare": false, 00:05:10.122 "compare_and_write": false, 00:05:10.122 "copy": true, 00:05:10.122 "flush": true, 00:05:10.122 "get_zone_info": false, 00:05:10.122 "nvme_admin": false, 00:05:10.122 "nvme_io": false, 00:05:10.122 "nvme_io_md": false, 00:05:10.122 "nvme_iov_md": false, 00:05:10.122 "read": true, 00:05:10.122 "reset": true, 00:05:10.122 "seek_data": false, 00:05:10.122 "seek_hole": false, 00:05:10.122 "unmap": true, 00:05:10.122 "write": true, 00:05:10.122 "write_zeroes": true, 00:05:10.122 "zcopy": true, 00:05:10.122 "zone_append": false, 00:05:10.122 "zone_management": false 00:05:10.122 }, 00:05:10.122 "uuid": "e0c6cbe0-c876-49f5-b15c-f176f071cac3", 00:05:10.122 "zoned": false 00:05:10.122 } 00:05:10.122 ]' 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:10.122 19:33:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.122 00:05:10.122 real 0m0.152s 00:05:10.122 user 0m0.098s 00:05:10.122 sys 0m0.019s 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.122 ************************************ 00:05:10.122 END TEST rpc_plugins 00:05:10.122 19:33:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.122 ************************************ 00:05:10.122 19:33:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.122 19:33:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.122 19:33:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.122 19:33:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.122 19:33:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.380 ************************************ 00:05:10.380 START TEST 
rpc_trace_cmd_test 00:05:10.380 ************************************ 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:10.380 "bdev": { 00:05:10.380 "mask": "0x8", 00:05:10.380 "tpoint_mask": "0xffffffffffffffff" 00:05:10.380 }, 00:05:10.380 "bdev_nvme": { 00:05:10.380 "mask": "0x4000", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "blobfs": { 00:05:10.380 "mask": "0x80", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "dsa": { 00:05:10.380 "mask": "0x200", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "ftl": { 00:05:10.380 "mask": "0x40", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "iaa": { 00:05:10.380 "mask": "0x1000", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "iscsi_conn": { 00:05:10.380 "mask": "0x2", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "nvme_pcie": { 00:05:10.380 "mask": "0x800", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "nvme_tcp": { 00:05:10.380 "mask": "0x2000", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "nvmf_rdma": { 00:05:10.380 "mask": "0x10", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "nvmf_tcp": { 00:05:10.380 "mask": "0x20", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "scsi": { 00:05:10.380 "mask": "0x4", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "sock": { 00:05:10.380 "mask": "0x8000", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "thread": { 00:05:10.380 "mask": "0x400", 00:05:10.380 "tpoint_mask": "0x0" 00:05:10.380 }, 00:05:10.380 "tpoint_group_mask": "0x8", 00:05:10.380 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60529" 00:05:10.380 }' 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:10.380 19:33:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.380 19:33:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.381 19:33:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.381 19:33:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.381 19:33:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.381 19:33:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.381 19:33:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.639 19:33:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.639 00:05:10.639 real 0m0.264s 00:05:10.639 user 0m0.230s 00:05:10.639 sys 0m0.023s 00:05:10.639 19:33:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.639 19:33:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.639 ************************************ 00:05:10.639 END TEST 
rpc_trace_cmd_test 00:05:10.639 ************************************ 00:05:10.639 19:33:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.639 19:33:36 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:10.639 19:33:36 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:10.639 19:33:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.639 19:33:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.639 19:33:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.639 ************************************ 00:05:10.639 START TEST go_rpc 00:05:10.639 ************************************ 00:05:10.639 19:33:36 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.639 19:33:36 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.639 19:33:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.639 19:33:36 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["d5732505-aa61-46b9-9f59-28d9cb5bc819"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"d5732505-aa61-46b9-9f59-28d9cb5bc819","zoned":false}]' 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.639 19:33:36 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.639 19:33:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.639 19:33:36 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:10.639 19:33:36 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:10.898 19:33:36 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:10.898 00:05:10.898 real 0m0.229s 00:05:10.898 user 0m0.158s 00:05:10.898 sys 0m0.036s 00:05:10.898 19:33:36 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.898 19:33:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.898 ************************************ 00:05:10.898 END TEST 
go_rpc 00:05:10.898 ************************************ 00:05:10.898 19:33:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.898 19:33:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.898 19:33:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.898 19:33:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.898 19:33:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.898 19:33:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.898 ************************************ 00:05:10.898 START TEST rpc_daemon_integrity 00:05:10.898 ************************************ 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.898 { 00:05:10.898 "aliases": [ 00:05:10.898 "de39203c-6eb0-4bb4-b40b-19df41893cc5" 00:05:10.898 ], 00:05:10.898 "assigned_rate_limits": { 00:05:10.898 "r_mbytes_per_sec": 0, 00:05:10.898 "rw_ios_per_sec": 0, 00:05:10.898 "rw_mbytes_per_sec": 0, 00:05:10.898 "w_mbytes_per_sec": 0 00:05:10.898 }, 00:05:10.898 "block_size": 512, 00:05:10.898 "claimed": false, 00:05:10.898 "driver_specific": {}, 00:05:10.898 "memory_domains": [ 00:05:10.898 { 00:05:10.898 "dma_device_id": "system", 00:05:10.898 "dma_device_type": 1 00:05:10.898 }, 00:05:10.898 { 00:05:10.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.898 "dma_device_type": 2 00:05:10.898 } 00:05:10.898 ], 00:05:10.898 "name": "Malloc3", 00:05:10.898 "num_blocks": 16384, 00:05:10.898 "product_name": "Malloc disk", 00:05:10.898 "supported_io_types": { 00:05:10.898 "abort": true, 00:05:10.898 "compare": false, 00:05:10.898 "compare_and_write": false, 00:05:10.898 "copy": true, 00:05:10.898 "flush": true, 00:05:10.898 "get_zone_info": false, 00:05:10.898 "nvme_admin": false, 00:05:10.898 "nvme_io": false, 00:05:10.898 "nvme_io_md": false, 00:05:10.898 "nvme_iov_md": false, 00:05:10.898 "read": true, 00:05:10.898 "reset": true, 00:05:10.898 "seek_data": 
false, 00:05:10.898 "seek_hole": false, 00:05:10.898 "unmap": true, 00:05:10.898 "write": true, 00:05:10.898 "write_zeroes": true, 00:05:10.898 "zcopy": true, 00:05:10.898 "zone_append": false, 00:05:10.898 "zone_management": false 00:05:10.898 }, 00:05:10.898 "uuid": "de39203c-6eb0-4bb4-b40b-19df41893cc5", 00:05:10.898 "zoned": false 00:05:10.898 } 00:05:10.898 ]' 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.898 [2024-07-15 19:33:36.642149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:10.898 [2024-07-15 19:33:36.642206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.898 [2024-07-15 19:33:36.642224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf13d00 00:05:10.898 [2024-07-15 19:33:36.642234] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.898 [2024-07-15 19:33:36.643717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.898 [2024-07-15 19:33:36.643767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.898 Passthru0 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.898 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.157 { 00:05:11.157 "aliases": [ 00:05:11.157 "de39203c-6eb0-4bb4-b40b-19df41893cc5" 00:05:11.157 ], 00:05:11.157 "assigned_rate_limits": { 00:05:11.157 "r_mbytes_per_sec": 0, 00:05:11.157 "rw_ios_per_sec": 0, 00:05:11.157 "rw_mbytes_per_sec": 0, 00:05:11.157 "w_mbytes_per_sec": 0 00:05:11.157 }, 00:05:11.157 "block_size": 512, 00:05:11.157 "claim_type": "exclusive_write", 00:05:11.157 "claimed": true, 00:05:11.157 "driver_specific": {}, 00:05:11.157 "memory_domains": [ 00:05:11.157 { 00:05:11.157 "dma_device_id": "system", 00:05:11.157 "dma_device_type": 1 00:05:11.157 }, 00:05:11.157 { 00:05:11.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.157 "dma_device_type": 2 00:05:11.157 } 00:05:11.157 ], 00:05:11.157 "name": "Malloc3", 00:05:11.157 "num_blocks": 16384, 00:05:11.157 "product_name": "Malloc disk", 00:05:11.157 "supported_io_types": { 00:05:11.157 "abort": true, 00:05:11.157 "compare": false, 00:05:11.157 "compare_and_write": false, 00:05:11.157 "copy": true, 00:05:11.157 "flush": true, 00:05:11.157 "get_zone_info": false, 00:05:11.157 "nvme_admin": false, 00:05:11.157 "nvme_io": false, 00:05:11.157 "nvme_io_md": false, 00:05:11.157 "nvme_iov_md": false, 00:05:11.157 "read": true, 00:05:11.157 "reset": true, 00:05:11.157 "seek_data": false, 00:05:11.157 "seek_hole": false, 00:05:11.157 "unmap": true, 00:05:11.157 "write": true, 00:05:11.157 "write_zeroes": 
true, 00:05:11.157 "zcopy": true, 00:05:11.157 "zone_append": false, 00:05:11.157 "zone_management": false 00:05:11.157 }, 00:05:11.157 "uuid": "de39203c-6eb0-4bb4-b40b-19df41893cc5", 00:05:11.157 "zoned": false 00:05:11.157 }, 00:05:11.157 { 00:05:11.157 "aliases": [ 00:05:11.157 "66259f74-a6c7-5a22-a970-92742d88f1e3" 00:05:11.157 ], 00:05:11.157 "assigned_rate_limits": { 00:05:11.157 "r_mbytes_per_sec": 0, 00:05:11.157 "rw_ios_per_sec": 0, 00:05:11.157 "rw_mbytes_per_sec": 0, 00:05:11.157 "w_mbytes_per_sec": 0 00:05:11.157 }, 00:05:11.157 "block_size": 512, 00:05:11.157 "claimed": false, 00:05:11.157 "driver_specific": { 00:05:11.157 "passthru": { 00:05:11.157 "base_bdev_name": "Malloc3", 00:05:11.157 "name": "Passthru0" 00:05:11.157 } 00:05:11.157 }, 00:05:11.157 "memory_domains": [ 00:05:11.157 { 00:05:11.157 "dma_device_id": "system", 00:05:11.157 "dma_device_type": 1 00:05:11.157 }, 00:05:11.157 { 00:05:11.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.157 "dma_device_type": 2 00:05:11.157 } 00:05:11.157 ], 00:05:11.157 "name": "Passthru0", 00:05:11.157 "num_blocks": 16384, 00:05:11.157 "product_name": "passthru", 00:05:11.157 "supported_io_types": { 00:05:11.157 "abort": true, 00:05:11.157 "compare": false, 00:05:11.157 "compare_and_write": false, 00:05:11.157 "copy": true, 00:05:11.157 "flush": true, 00:05:11.157 "get_zone_info": false, 00:05:11.157 "nvme_admin": false, 00:05:11.157 "nvme_io": false, 00:05:11.157 "nvme_io_md": false, 00:05:11.157 "nvme_iov_md": false, 00:05:11.157 "read": true, 00:05:11.157 "reset": true, 00:05:11.157 "seek_data": false, 00:05:11.157 "seek_hole": false, 00:05:11.157 "unmap": true, 00:05:11.157 "write": true, 00:05:11.157 "write_zeroes": true, 00:05:11.157 "zcopy": true, 00:05:11.157 "zone_append": false, 00:05:11.157 "zone_management": false 00:05:11.157 }, 00:05:11.157 "uuid": "66259f74-a6c7-5a22-a970-92742d88f1e3", 00:05:11.157 "zoned": false 00:05:11.157 } 00:05:11.157 ]' 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.157 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.158 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.158 19:33:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.158 
00:05:11.158 real 0m0.323s 00:05:11.158 user 0m0.218s 00:05:11.158 sys 0m0.036s 00:05:11.158 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.158 19:33:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.158 ************************************ 00:05:11.158 END TEST rpc_daemon_integrity 00:05:11.158 ************************************ 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.158 19:33:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.158 19:33:36 rpc -- rpc/rpc.sh@84 -- # killprocess 60529 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@948 -- # '[' -z 60529 ']' 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@952 -- # kill -0 60529 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@953 -- # uname 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60529 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60529' 00:05:11.158 killing process with pid 60529 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@967 -- # kill 60529 00:05:11.158 19:33:36 rpc -- common/autotest_common.sh@972 -- # wait 60529 00:05:11.733 00:05:11.733 real 0m3.103s 00:05:11.733 user 0m4.098s 00:05:11.733 sys 0m0.744s 00:05:11.733 19:33:37 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.733 ************************************ 00:05:11.733 END TEST rpc 00:05:11.733 ************************************ 00:05:11.733 19:33:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.733 19:33:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.733 19:33:37 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:11.733 19:33:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.733 19:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.733 19:33:37 -- common/autotest_common.sh@10 -- # set +x 00:05:11.733 ************************************ 00:05:11.733 START TEST skip_rpc 00:05:11.733 ************************************ 00:05:11.733 19:33:37 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:11.733 * Looking for test storage... 
00:05:11.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.733 19:33:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.733 19:33:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:11.733 19:33:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.733 19:33:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.733 19:33:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.733 19:33:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.733 ************************************ 00:05:11.733 START TEST skip_rpc 00:05:11.733 ************************************ 00:05:11.733 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:11.733 19:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60790 00:05:11.733 19:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.733 19:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:11.733 19:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.733 [2024-07-15 19:33:37.457049] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:11.733 [2024-07-15 19:33:37.457141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60790 ] 00:05:11.993 [2024-07-15 19:33:37.593314] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.993 [2024-07-15 19:33:37.720044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.258 2024/07/15 19:33:42 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.258 19:33:42 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60790 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60790 ']' 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60790 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60790 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.258 killing process with pid 60790 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60790' 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60790 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60790 00:05:17.258 00:05:17.258 real 0m5.429s 00:05:17.258 user 0m5.039s 00:05:17.258 sys 0m0.287s 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.258 19:33:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.258 ************************************ 00:05:17.258 END TEST skip_rpc 00:05:17.258 ************************************ 00:05:17.258 19:33:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.258 19:33:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:17.258 19:33:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.258 19:33:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.258 19:33:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.258 ************************************ 00:05:17.258 START TEST skip_rpc_with_json 00:05:17.258 ************************************ 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60888 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60888 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 60888 ']' 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
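skip_rpc above covered the negative path: started with --no-rpc-server, the target never creates /var/tmp/spdk.sock, so the NOT-wrapped rpc_cmd spdk_get_version has to fail with the "dial unix ... no such file or directory" error seen earlier. skip_rpc_with_json, whose target (pid 60888) is starting here, is the positive counterpart: it builds state over RPC, saves it with save_config, and proves the target can come back from the saved JSON alone. The exchange that follows amounts to roughly this sketch; the redirection of target output into log.txt is inferred from the grep at the end rather than shown verbatim in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
    $rpc nvmf_create_transport -t tcp           # nvmf_get_transports fails with -19 until a transport exists
    $rpc save_config > "$cfg"                   # dumps the full subsystem configuration shown below
    # kill the RPC-driven target (pid 60888), then restart purely from the saved config:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' "$log"         # the TCP transport must be re-created without any RPC call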
00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.258 19:33:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.258 [2024-07-15 19:33:42.966991] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:17.258 [2024-07-15 19:33:42.967128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60888 ] 00:05:17.514 [2024-07-15 19:33:43.108693] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.514 [2024-07-15 19:33:43.225703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.444 [2024-07-15 19:33:43.917694] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:18.444 2024/07/15 19:33:43 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:18.444 request: 00:05:18.444 { 00:05:18.444 "method": "nvmf_get_transports", 00:05:18.444 "params": { 00:05:18.444 "trtype": "tcp" 00:05:18.444 } 00:05:18.444 } 00:05:18.444 Got JSON-RPC error response 00:05:18.444 GoRPCClient: error on JSON-RPC call 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.444 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.444 [2024-07-15 19:33:43.929815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.445 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.445 19:33:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:18.445 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.445 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.445 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.445 19:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.445 { 00:05:18.445 "subsystems": [ 00:05:18.445 { 00:05:18.445 "subsystem": "keyring", 00:05:18.445 "config": [] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "iobuf", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "iobuf_set_options", 00:05:18.445 "params": { 00:05:18.445 "large_bufsize": 135168, 00:05:18.445 "large_pool_count": 1024, 00:05:18.445 "small_bufsize": 8192, 00:05:18.445 "small_pool_count": 8192 00:05:18.445 } 00:05:18.445 } 
00:05:18.445 ] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "sock", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "sock_set_default_impl", 00:05:18.445 "params": { 00:05:18.445 "impl_name": "posix" 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "sock_impl_set_options", 00:05:18.445 "params": { 00:05:18.445 "enable_ktls": false, 00:05:18.445 "enable_placement_id": 0, 00:05:18.445 "enable_quickack": false, 00:05:18.445 "enable_recv_pipe": true, 00:05:18.445 "enable_zerocopy_send_client": false, 00:05:18.445 "enable_zerocopy_send_server": true, 00:05:18.445 "impl_name": "ssl", 00:05:18.445 "recv_buf_size": 4096, 00:05:18.445 "send_buf_size": 4096, 00:05:18.445 "tls_version": 0, 00:05:18.445 "zerocopy_threshold": 0 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "sock_impl_set_options", 00:05:18.445 "params": { 00:05:18.445 "enable_ktls": false, 00:05:18.445 "enable_placement_id": 0, 00:05:18.445 "enable_quickack": false, 00:05:18.445 "enable_recv_pipe": true, 00:05:18.445 "enable_zerocopy_send_client": false, 00:05:18.445 "enable_zerocopy_send_server": true, 00:05:18.445 "impl_name": "posix", 00:05:18.445 "recv_buf_size": 2097152, 00:05:18.445 "send_buf_size": 2097152, 00:05:18.445 "tls_version": 0, 00:05:18.445 "zerocopy_threshold": 0 00:05:18.445 } 00:05:18.445 } 00:05:18.445 ] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "vmd", 00:05:18.445 "config": [] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "accel", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "accel_set_options", 00:05:18.445 "params": { 00:05:18.445 "buf_count": 2048, 00:05:18.445 "large_cache_size": 16, 00:05:18.445 "sequence_count": 2048, 00:05:18.445 "small_cache_size": 128, 00:05:18.445 "task_count": 2048 00:05:18.445 } 00:05:18.445 } 00:05:18.445 ] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "bdev", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "bdev_set_options", 00:05:18.445 "params": { 00:05:18.445 "bdev_auto_examine": true, 00:05:18.445 "bdev_io_cache_size": 256, 00:05:18.445 "bdev_io_pool_size": 65535, 00:05:18.445 "iobuf_large_cache_size": 16, 00:05:18.445 "iobuf_small_cache_size": 128 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "bdev_raid_set_options", 00:05:18.445 "params": { 00:05:18.445 "process_window_size_kb": 1024 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "bdev_iscsi_set_options", 00:05:18.445 "params": { 00:05:18.445 "timeout_sec": 30 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "bdev_nvme_set_options", 00:05:18.445 "params": { 00:05:18.445 "action_on_timeout": "none", 00:05:18.445 "allow_accel_sequence": false, 00:05:18.445 "arbitration_burst": 0, 00:05:18.445 "bdev_retry_count": 3, 00:05:18.445 "ctrlr_loss_timeout_sec": 0, 00:05:18.445 "delay_cmd_submit": true, 00:05:18.445 "dhchap_dhgroups": [ 00:05:18.445 "null", 00:05:18.445 "ffdhe2048", 00:05:18.445 "ffdhe3072", 00:05:18.445 "ffdhe4096", 00:05:18.445 "ffdhe6144", 00:05:18.445 "ffdhe8192" 00:05:18.445 ], 00:05:18.445 "dhchap_digests": [ 00:05:18.445 "sha256", 00:05:18.445 "sha384", 00:05:18.445 "sha512" 00:05:18.445 ], 00:05:18.445 "disable_auto_failback": false, 00:05:18.445 "fast_io_fail_timeout_sec": 0, 00:05:18.445 "generate_uuids": false, 00:05:18.445 "high_priority_weight": 0, 00:05:18.445 "io_path_stat": false, 00:05:18.445 "io_queue_requests": 0, 00:05:18.445 "keep_alive_timeout_ms": 10000, 00:05:18.445 "low_priority_weight": 0, 
00:05:18.445 "medium_priority_weight": 0, 00:05:18.445 "nvme_adminq_poll_period_us": 10000, 00:05:18.445 "nvme_error_stat": false, 00:05:18.445 "nvme_ioq_poll_period_us": 0, 00:05:18.445 "rdma_cm_event_timeout_ms": 0, 00:05:18.445 "rdma_max_cq_size": 0, 00:05:18.445 "rdma_srq_size": 0, 00:05:18.445 "reconnect_delay_sec": 0, 00:05:18.445 "timeout_admin_us": 0, 00:05:18.445 "timeout_us": 0, 00:05:18.445 "transport_ack_timeout": 0, 00:05:18.445 "transport_retry_count": 4, 00:05:18.445 "transport_tos": 0 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "bdev_nvme_set_hotplug", 00:05:18.445 "params": { 00:05:18.445 "enable": false, 00:05:18.445 "period_us": 100000 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "bdev_wait_for_examine" 00:05:18.445 } 00:05:18.445 ] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "scsi", 00:05:18.445 "config": null 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "scheduler", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "framework_set_scheduler", 00:05:18.445 "params": { 00:05:18.445 "name": "static" 00:05:18.445 } 00:05:18.445 } 00:05:18.445 ] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "vhost_scsi", 00:05:18.445 "config": [] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "vhost_blk", 00:05:18.445 "config": [] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "ublk", 00:05:18.445 "config": [] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "nbd", 00:05:18.445 "config": [] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "nvmf", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "nvmf_set_config", 00:05:18.445 "params": { 00:05:18.445 "admin_cmd_passthru": { 00:05:18.445 "identify_ctrlr": false 00:05:18.445 }, 00:05:18.445 "discovery_filter": "match_any" 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "nvmf_set_max_subsystems", 00:05:18.445 "params": { 00:05:18.445 "max_subsystems": 1024 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "nvmf_set_crdt", 00:05:18.445 "params": { 00:05:18.445 "crdt1": 0, 00:05:18.445 "crdt2": 0, 00:05:18.445 "crdt3": 0 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "nvmf_create_transport", 00:05:18.445 "params": { 00:05:18.445 "abort_timeout_sec": 1, 00:05:18.445 "ack_timeout": 0, 00:05:18.445 "buf_cache_size": 4294967295, 00:05:18.445 "c2h_success": true, 00:05:18.445 "data_wr_pool_size": 0, 00:05:18.445 "dif_insert_or_strip": false, 00:05:18.445 "in_capsule_data_size": 4096, 00:05:18.445 "io_unit_size": 131072, 00:05:18.445 "max_aq_depth": 128, 00:05:18.445 "max_io_qpairs_per_ctrlr": 127, 00:05:18.445 "max_io_size": 131072, 00:05:18.445 "max_queue_depth": 128, 00:05:18.445 "num_shared_buffers": 511, 00:05:18.445 "sock_priority": 0, 00:05:18.445 "trtype": "TCP", 00:05:18.445 "zcopy": false 00:05:18.445 } 00:05:18.445 } 00:05:18.445 ] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "iscsi", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "iscsi_set_options", 00:05:18.445 "params": { 00:05:18.445 "allow_duplicated_isid": false, 00:05:18.445 "chap_group": 0, 00:05:18.445 "data_out_pool_size": 2048, 00:05:18.445 "default_time2retain": 20, 00:05:18.445 "default_time2wait": 2, 00:05:18.445 "disable_chap": false, 00:05:18.445 "error_recovery_level": 0, 00:05:18.445 "first_burst_length": 8192, 00:05:18.445 "immediate_data": true, 00:05:18.445 "immediate_data_pool_size": 16384, 00:05:18.445 "max_connections_per_session": 
2, 00:05:18.445 "max_large_datain_per_connection": 64, 00:05:18.445 "max_queue_depth": 64, 00:05:18.446 "max_r2t_per_connection": 4, 00:05:18.446 "max_sessions": 128, 00:05:18.446 "mutual_chap": false, 00:05:18.446 "node_base": "iqn.2016-06.io.spdk", 00:05:18.446 "nop_in_interval": 30, 00:05:18.446 "nop_timeout": 60, 00:05:18.446 "pdu_pool_size": 36864, 00:05:18.446 "require_chap": false 00:05:18.446 } 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 } 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60888 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 60888 ']' 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 60888 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60888 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.446 killing process with pid 60888 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60888' 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 60888 00:05:18.446 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 60888 00:05:19.022 19:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60922 00:05:19.022 19:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:19.022 19:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.280 19:33:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60922 00:05:24.280 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 60922 ']' 00:05:24.280 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 60922 00:05:24.280 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60922 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.281 killing process with pid 60922 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60922' 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 60922 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 60922 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:24.281 00:05:24.281 real 0m7.107s 00:05:24.281 user 0m6.777s 00:05:24.281 sys 0m0.705s 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.281 19:33:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.281 ************************************ 00:05:24.281 END TEST skip_rpc_with_json 00:05:24.281 ************************************ 00:05:24.281 19:33:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.281 19:33:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:24.281 19:33:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.281 19:33:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.281 19:33:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.281 ************************************ 00:05:24.281 START TEST skip_rpc_with_delay 00:05:24.281 ************************************ 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:24.281 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.540 [2024-07-15 19:33:50.117516] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
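Editor's note: the spdk_app_start error above (together with the unclaim_cpu_cores follow-up on the next log line) is exactly what test_skip_rpc_with_delay is looking for: spdk_tgt must refuse to combine --no-rpc-server with --wait-for-rpc. A minimal sketch of that expected-failure check, using the binary path and flags shown in the trace; the real test wraps the call in the NOT() helper from autotest_common.sh, while here the non-zero exit status is checked directly.

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
# Expected to fail: no RPC server is started, so --wait-for-rpc is rejected.
if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
  echo "unexpected: spdk_tgt started despite --wait-for-rpc without an RPC server" >&2
  exit 1
fi
echo "got the expected startup failure"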
00:05:24.540 [2024-07-15 19:33:50.117677] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:24.540 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:24.540 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.540 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.540 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.540 00:05:24.540 real 0m0.098s 00:05:24.540 user 0m0.065s 00:05:24.540 sys 0m0.032s 00:05:24.540 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.540 ************************************ 00:05:24.540 END TEST skip_rpc_with_delay 00:05:24.540 ************************************ 00:05:24.540 19:33:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:24.540 19:33:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.540 19:33:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:24.540 19:33:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:24.540 19:33:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:24.540 19:33:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.540 19:33:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.540 19:33:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.540 ************************************ 00:05:24.540 START TEST exit_on_failed_rpc_init 00:05:24.540 ************************************ 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61037 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61037 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61037 ']' 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.540 19:33:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.540 [2024-07-15 19:33:50.267377] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:05:24.540 [2024-07-15 19:33:50.267490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61037 ] 00:05:24.799 [2024-07-15 19:33:50.406090] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.799 [2024-07-15 19:33:50.543938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:25.734 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.734 [2024-07-15 19:33:51.411232] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:25.734 [2024-07-15 19:33:51.411338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61067 ] 00:05:25.992 [2024-07-15 19:33:51.550751] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.992 [2024-07-15 19:33:51.667735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.992 [2024-07-15 19:33:51.667839] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:25.992 [2024-07-15 19:33:51.667856] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.992 [2024-07-15 19:33:51.667866] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61037 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61037 ']' 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61037 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61037 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.250 killing process with pid 61037 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61037' 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61037 00:05:26.250 19:33:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61037 00:05:26.509 00:05:26.509 real 0m2.003s 00:05:26.509 user 0m2.395s 00:05:26.509 sys 0m0.480s 00:05:26.509 19:33:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.509 19:33:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.509 ************************************ 00:05:26.509 END TEST exit_on_failed_rpc_init 00:05:26.509 ************************************ 00:05:26.509 19:33:52 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.509 19:33:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.509 00:05:26.509 real 0m14.933s 00:05:26.509 user 0m14.365s 00:05:26.509 sys 0m1.697s 00:05:26.509 19:33:52 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.509 19:33:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.509 ************************************ 00:05:26.509 END TEST skip_rpc 00:05:26.509 ************************************ 00:05:26.509 19:33:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.509 19:33:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:26.509 19:33:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.509 
19:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.509 19:33:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.767 ************************************ 00:05:26.767 START TEST rpc_client 00:05:26.767 ************************************ 00:05:26.767 19:33:52 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:26.767 * Looking for test storage... 00:05:26.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:26.767 19:33:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:26.767 OK 00:05:26.767 19:33:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:26.767 00:05:26.767 real 0m0.105s 00:05:26.767 user 0m0.052s 00:05:26.767 sys 0m0.059s 00:05:26.767 19:33:52 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.767 19:33:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:26.767 ************************************ 00:05:26.767 END TEST rpc_client 00:05:26.767 ************************************ 00:05:26.767 19:33:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.767 19:33:52 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.767 19:33:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.767 19:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.767 19:33:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.767 ************************************ 00:05:26.767 START TEST json_config 00:05:26.767 ************************************ 00:05:26.767 19:33:52 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.767 19:33:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:26.767 19:33:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:26.767 19:33:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.768 19:33:52 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.768 19:33:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.768 19:33:52 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.768 19:33:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.768 19:33:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.768 19:33:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.768 19:33:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.768 19:33:52 json_config -- paths/export.sh@5 -- # export PATH 00:05:26.768 19:33:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@47 -- # : 0 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.768 19:33:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.768 INFO: JSON configuration test init 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.768 19:33:52 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.768 19:33:52 json_config -- json_config/common.sh@9 -- # local app=target 00:05:26.768 19:33:52 json_config -- json_config/common.sh@10 -- # shift 00:05:26.768 19:33:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.768 19:33:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.768 19:33:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.768 19:33:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.768 19:33:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.768 19:33:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61185 00:05:26.768 Waiting for target to run... 00:05:26.768 19:33:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:26.768 19:33:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.768 19:33:52 json_config -- json_config/common.sh@25 -- # waitforlisten 61185 /var/tmp/spdk_tgt.sock 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@829 -- # '[' -z 61185 ']' 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.768 19:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.026 [2024-07-15 19:33:52.605817] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:27.026 [2024-07-15 19:33:52.606448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61185 ] 00:05:27.284 [2024-07-15 19:33:53.033312] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.543 [2024-07-15 19:33:53.137429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.799 19:33:53 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.799 19:33:53 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:28.056 00:05:28.056 19:33:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.056 19:33:53 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:28.056 19:33:53 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:28.056 19:33:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.056 19:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.056 19:33:53 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:28.056 19:33:53 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:28.056 19:33:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.056 19:33:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.056 19:33:53 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:28.056 19:33:53 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:28.056 19:33:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:28.621 19:33:54 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:28.621 19:33:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:28.621 19:33:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.621 19:33:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.621 19:33:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:28.621 19:33:54 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:28.621 19:33:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:28.621 19:33:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:28.621 19:33:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.621 19:33:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:28.880 19:33:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.880 19:33:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:28.880 19:33:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.880 19:33:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:28.880 19:33:54 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.880 19:33:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.139 MallocForNvmf0 00:05:29.139 19:33:54 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.139 19:33:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.398 MallocForNvmf1 00:05:29.398 19:33:55 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.398 19:33:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.656 [2024-07-15 19:33:55.292270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.656 19:33:55 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.656 19:33:55 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.914 19:33:55 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.914 19:33:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.172 19:33:55 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.172 19:33:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.739 19:33:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.739 19:33:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.739 [2024-07-15 19:33:56.500876] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:30.997 19:33:56 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:30.997 19:33:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.997 19:33:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.997 19:33:56 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:30.997 19:33:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.997 19:33:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.997 19:33:56 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:30.997 19:33:56 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.997 19:33:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.256 MallocBdevForConfigChangeCheck 00:05:31.256 19:33:56 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:31.256 19:33:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.256 19:33:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.256 19:33:56 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:31.256 19:33:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.822 INFO: shutting down applications... 00:05:31.822 19:33:57 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
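Editor's note: at this point the json_config test has built its NVMe-oF target entirely through JSON-RPC. A condensed recap of the rpc.py sequence traced above, against the same /var/tmp/spdk_tgt.sock socket (a sketch that mirrors the calls shown in the trace, not an addition to the test):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
# Backing malloc bdevs, with the size/block-size arguments exactly as traced.
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, one subsystem, both namespaces, and a listener on 127.0.0.1:4420.
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420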
00:05:31.822 19:33:57 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:31.822 19:33:57 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:31.822 19:33:57 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:31.822 19:33:57 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:32.081 Calling clear_iscsi_subsystem 00:05:32.081 Calling clear_nvmf_subsystem 00:05:32.081 Calling clear_nbd_subsystem 00:05:32.081 Calling clear_ublk_subsystem 00:05:32.081 Calling clear_vhost_blk_subsystem 00:05:32.081 Calling clear_vhost_scsi_subsystem 00:05:32.081 Calling clear_bdev_subsystem 00:05:32.081 19:33:57 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:32.081 19:33:57 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:32.081 19:33:57 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:32.081 19:33:57 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:32.081 19:33:57 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.081 19:33:57 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:32.354 19:33:58 json_config -- json_config/json_config.sh@345 -- # break 00:05:32.354 19:33:58 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:32.354 19:33:58 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:32.354 19:33:58 json_config -- json_config/common.sh@31 -- # local app=target 00:05:32.354 19:33:58 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.354 19:33:58 json_config -- json_config/common.sh@35 -- # [[ -n 61185 ]] 00:05:32.354 19:33:58 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61185 00:05:32.354 19:33:58 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.354 19:33:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.354 19:33:58 json_config -- json_config/common.sh@41 -- # kill -0 61185 00:05:32.354 19:33:58 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.920 19:33:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.920 19:33:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.920 19:33:58 json_config -- json_config/common.sh@41 -- # kill -0 61185 00:05:32.920 19:33:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.920 19:33:58 json_config -- json_config/common.sh@43 -- # break 00:05:32.920 19:33:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.920 SPDK target shutdown done 00:05:32.920 19:33:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.920 INFO: relaunching applications... 00:05:32.920 19:33:58 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
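Editor's note: before relaunching, the target that was started with --wait-for-rpc is torn down with the SIGINT-and-poll loop traced above. A minimal sketch of that loop with the same 30 x 0.5 s budget; the variable name app_pid is illustrative, and the real json_config/common.sh carries extra bookkeeping not shown here.

kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
  # kill -0 only probes whether the process still exists; it sends no signal.
  kill -0 "$app_pid" 2> /dev/null || break
  sleep 0.5
done
echo 'SPDK target shutdown done'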
00:05:32.920 19:33:58 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.920 19:33:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.920 19:33:58 json_config -- json_config/common.sh@10 -- # shift 00:05:32.920 19:33:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.920 19:33:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.920 19:33:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.920 19:33:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.920 19:33:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.920 19:33:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61465 00:05:32.920 19:33:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.920 Waiting for target to run... 00:05:32.920 19:33:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.920 19:33:58 json_config -- json_config/common.sh@25 -- # waitforlisten 61465 /var/tmp/spdk_tgt.sock 00:05:32.920 19:33:58 json_config -- common/autotest_common.sh@829 -- # '[' -z 61465 ']' 00:05:32.920 19:33:58 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.920 19:33:58 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.920 19:33:58 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.920 19:33:58 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.920 19:33:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.920 [2024-07-15 19:33:58.689890] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:32.920 [2024-07-15 19:33:58.690627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61465 ] 00:05:33.486 [2024-07-15 19:33:59.113920] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.486 [2024-07-15 19:33:59.211801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.052 [2024-07-15 19:33:59.542689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.052 [2024-07-15 19:33:59.574758] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.052 19:33:59 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.052 19:33:59 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:34.052 19:33:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:34.052 00:05:34.052 19:33:59 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:34.052 INFO: Checking if target configuration is the same... 00:05:34.052 19:33:59 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
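Editor's note: the check announced here is a plain textual diff of two JSON dumps, the configuration the relaunched target reports now versus the spdk_tgt_config.json it was started from, with both normalized first. A condensed sketch of what json_diff.sh does in the trace that follows; only config_filter.py -method sort, rpc.py save_config and diff -u are taken from the trace, while the temp-file handling and redirection plumbing are assumptions.

rootdir=/home/vagrant/spdk_repo/spdk
sorted_ref=$(mktemp /tmp/ref.XXX)
sorted_live=$(mktemp /tmp/live.XXX)
# Run both dumps through the same sort filter before diffing them.
"$rootdir/test/json_config/config_filter.py" -method sort < "$rootdir/spdk_tgt_config.json" > "$sorted_ref"
"$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
  | "$rootdir/test/json_config/config_filter.py" -method sort > "$sorted_live"
if diff -u "$sorted_ref" "$sorted_live"; then
  echo 'INFO: JSON config files are the same'
else
  echo 'INFO: configuration change detected.'
fi
rm "$sorted_ref" "$sorted_live"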
00:05:34.052 19:33:59 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.052 19:33:59 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:34.052 19:33:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.052 + '[' 2 -ne 2 ']' 00:05:34.052 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:34.052 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:34.052 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:34.052 +++ basename /dev/fd/62 00:05:34.052 ++ mktemp /tmp/62.XXX 00:05:34.052 + tmp_file_1=/tmp/62.h4L 00:05:34.052 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.052 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:34.052 + tmp_file_2=/tmp/spdk_tgt_config.json.14B 00:05:34.052 + ret=0 00:05:34.052 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:34.619 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:34.619 + diff -u /tmp/62.h4L /tmp/spdk_tgt_config.json.14B 00:05:34.619 INFO: JSON config files are the same 00:05:34.619 + echo 'INFO: JSON config files are the same' 00:05:34.619 + rm /tmp/62.h4L /tmp/spdk_tgt_config.json.14B 00:05:34.619 + exit 0 00:05:34.619 19:34:00 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:34.619 19:34:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:34.619 INFO: changing configuration and checking if this can be detected... 00:05:34.619 19:34:00 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.619 19:34:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:34.877 19:34:00 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.877 19:34:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:34.877 19:34:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.877 + '[' 2 -ne 2 ']' 00:05:34.877 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:34.877 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:34.877 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:34.877 +++ basename /dev/fd/62 00:05:34.877 ++ mktemp /tmp/62.XXX 00:05:34.877 + tmp_file_1=/tmp/62.uwf 00:05:34.877 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.877 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:34.877 + tmp_file_2=/tmp/spdk_tgt_config.json.V5Q 00:05:34.877 + ret=0 00:05:34.877 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:35.444 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:35.444 + diff -u /tmp/62.uwf /tmp/spdk_tgt_config.json.V5Q 00:05:35.444 + ret=1 00:05:35.444 + echo '=== Start of file: /tmp/62.uwf ===' 00:05:35.444 + cat /tmp/62.uwf 00:05:35.444 + echo '=== End of file: /tmp/62.uwf ===' 00:05:35.444 + echo '' 00:05:35.444 + echo '=== Start of file: /tmp/spdk_tgt_config.json.V5Q ===' 00:05:35.444 + cat /tmp/spdk_tgt_config.json.V5Q 00:05:35.444 + echo '=== End of file: /tmp/spdk_tgt_config.json.V5Q ===' 00:05:35.444 + echo '' 00:05:35.444 + rm /tmp/62.uwf /tmp/spdk_tgt_config.json.V5Q 00:05:35.444 + exit 1 00:05:35.444 INFO: configuration change detected. 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@317 -- # [[ -n 61465 ]] 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.444 19:34:01 json_config -- json_config/json_config.sh@323 -- # killprocess 61465 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@948 -- # '[' -z 61465 ']' 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@952 -- # kill -0 61465 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@953 -- # uname 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61465 00:05:35.444 
19:34:01 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.444 killing process with pid 61465 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61465' 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@967 -- # kill 61465 00:05:35.444 19:34:01 json_config -- common/autotest_common.sh@972 -- # wait 61465 00:05:35.703 19:34:01 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.703 19:34:01 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:35.703 19:34:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.703 19:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.703 19:34:01 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:35.703 INFO: Success 00:05:35.703 19:34:01 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:35.703 00:05:35.703 real 0m9.001s 00:05:35.703 user 0m13.128s 00:05:35.703 sys 0m1.947s 00:05:35.703 19:34:01 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.703 19:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.703 ************************************ 00:05:35.703 END TEST json_config 00:05:35.703 ************************************ 00:05:35.962 19:34:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.962 19:34:01 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.962 19:34:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.962 19:34:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.962 19:34:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.962 ************************************ 00:05:35.962 START TEST json_config_extra_key 00:05:35.962 ************************************ 00:05:35.962 19:34:01 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.962 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.962 19:34:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.962 19:34:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.962 19:34:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.962 19:34:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.962 19:34:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.962 19:34:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.962 19:34:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.962 19:34:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.962 19:34:01 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.962 19:34:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.963 19:34:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.963 INFO: launching applications... 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:35.963 19:34:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.963 Waiting for target to run... 00:05:35.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61641 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61641 /var/tmp/spdk_tgt.sock 00:05:35.963 19:34:01 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61641 ']' 00:05:35.963 19:34:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.963 19:34:01 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.963 19:34:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.963 19:34:01 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.963 19:34:01 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.963 19:34:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.963 [2024-07-15 19:34:01.665124] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:35.963 [2024-07-15 19:34:01.665453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61641 ] 00:05:36.530 [2024-07-15 19:34:02.084458] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.530 [2024-07-15 19:34:02.189461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.098 19:34:02 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.098 19:34:02 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:37.098 00:05:37.098 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:37.098 INFO: shutting down applications... 
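The xtrace above is the whole lifecycle of the json_config_extra_key target in raw form: launch spdk_tgt with a pre-built JSON config on a private RPC socket, wait for it to listen, exercise it, then tear it down. A minimal sketch of that pattern follows, with the binary path, socket and flags copied from the trace; the readiness probe and retry counts are simplified stand-ins for the repo's waitforlisten and json_config_test_shutdown_app helpers, not their actual implementations:

#!/usr/bin/env bash
# Sketch of the start/wait/stop pattern traced above. Paths and flags are taken
# verbatim from the log; the readiness check and loop bounds are simplifications.
set -euo pipefail

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
sock=/var/tmp/spdk_tgt.sock
config=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

# 1. Launch the target with a pre-built JSON config on a private RPC socket.
"$spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json "$config" &
pid=$!

# 2. Wait for the app to come up (the real waitforlisten polls the RPC socket with retries).
for ((i = 0; i < 100; i++)); do
    [[ -S $sock ]] && break
    sleep 0.1
done

# 3. Graceful teardown: SIGINT, then poll the PID for up to 30 * 0.5 s.
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break
    sleep 0.5
done

The same kill / kill -0 polling reappears in every teardown below, which is why the '(( i < 30 ))' and 'sleep 0.5' lines keep showing up in the trace.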
00:05:37.098 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61641 ]] 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61641 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61641 00:05:37.098 19:34:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.666 19:34:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.666 SPDK target shutdown done 00:05:37.666 19:34:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.666 19:34:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61641 00:05:37.666 19:34:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.666 19:34:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:37.666 19:34:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.666 19:34:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.666 Success 00:05:37.666 19:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:37.666 ************************************ 00:05:37.666 END TEST json_config_extra_key 00:05:37.666 ************************************ 00:05:37.666 00:05:37.666 real 0m1.698s 00:05:37.666 user 0m1.637s 00:05:37.666 sys 0m0.463s 00:05:37.666 19:34:03 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.666 19:34:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.666 19:34:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.666 19:34:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.666 19:34:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.666 19:34:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.666 19:34:03 -- common/autotest_common.sh@10 -- # set +x 00:05:37.666 ************************************ 00:05:37.666 START TEST alias_rpc 00:05:37.666 ************************************ 00:05:37.666 19:34:03 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.666 * Looking for test storage... 
00:05:37.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:37.666 19:34:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.666 19:34:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61718 00:05:37.666 19:34:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.666 19:34:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61718 00:05:37.666 19:34:03 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61718 ']' 00:05:37.666 19:34:03 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.666 19:34:03 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.666 19:34:03 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.666 19:34:03 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.666 19:34:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.666 [2024-07-15 19:34:03.420527] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:37.666 [2024-07-15 19:34:03.421377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61718 ] 00:05:37.923 [2024-07-15 19:34:03.566329] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.181 [2024-07-15 19:34:03.705202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.746 19:34:04 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.746 19:34:04 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:38.746 19:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:39.005 19:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61718 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61718 ']' 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61718 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61718 00:05:39.005 killing process with pid 61718 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61718' 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@967 -- # kill 61718 00:05:39.005 19:34:04 alias_rpc -- common/autotest_common.sh@972 -- # wait 61718 00:05:39.573 ************************************ 00:05:39.573 END TEST alias_rpc 00:05:39.573 ************************************ 00:05:39.573 00:05:39.573 real 0m1.843s 00:05:39.573 user 0m2.042s 00:05:39.573 sys 0m0.508s 00:05:39.573 19:34:05 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.573 19:34:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.573 
19:34:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.573 19:34:05 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:39.573 19:34:05 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.573 19:34:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.573 19:34:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.573 19:34:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.573 ************************************ 00:05:39.573 START TEST dpdk_mem_utility 00:05:39.573 ************************************ 00:05:39.573 19:34:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.573 * Looking for test storage... 00:05:39.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:39.573 19:34:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:39.573 19:34:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61810 00:05:39.573 19:34:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61810 00:05:39.573 19:34:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.573 19:34:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61810 ']' 00:05:39.573 19:34:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.573 19:34:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.573 19:34:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.573 19:34:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.573 19:34:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.573 [2024-07-15 19:34:05.314058] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:05:39.573 [2024-07-15 19:34:05.314202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61810 ] 00:05:39.830 [2024-07-15 19:34:05.452384] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.830 [2024-07-15 19:34:05.565373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.763 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.763 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:40.763 19:34:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.763 19:34:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.763 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.763 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.763 { 00:05:40.764 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.764 } 00:05:40.764 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.764 19:34:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:40.764 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:40.764 1 heaps totaling size 814.000000 MiB 00:05:40.764 size: 814.000000 MiB heap id: 0 00:05:40.764 end heaps---------- 00:05:40.764 8 mempools totaling size 598.116089 MiB 00:05:40.764 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.764 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.764 size: 84.521057 MiB name: bdev_io_61810 00:05:40.764 size: 51.011292 MiB name: evtpool_61810 00:05:40.764 size: 50.003479 MiB name: msgpool_61810 00:05:40.764 size: 21.763794 MiB name: PDU_Pool 00:05:40.764 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.764 size: 0.026123 MiB name: Session_Pool 00:05:40.764 end mempools------- 00:05:40.764 6 memzones totaling size 4.142822 MiB 00:05:40.764 size: 1.000366 MiB name: RG_ring_0_61810 00:05:40.764 size: 1.000366 MiB name: RG_ring_1_61810 00:05:40.764 size: 1.000366 MiB name: RG_ring_4_61810 00:05:40.764 size: 1.000366 MiB name: RG_ring_5_61810 00:05:40.764 size: 0.125366 MiB name: RG_ring_2_61810 00:05:40.764 size: 0.015991 MiB name: RG_ring_3_61810 00:05:40.764 end memzones------- 00:05:40.764 19:34:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.764 heap id: 0 total size: 814.000000 MiB number of busy elements: 219 number of free elements: 15 00:05:40.764 list of free elements. 
size: 12.486755 MiB 00:05:40.764 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:40.764 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:40.764 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:40.764 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:40.764 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:40.764 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:40.764 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:40.764 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:40.764 element at address: 0x200000200000 with size: 0.837036 MiB 00:05:40.764 element at address: 0x20001aa00000 with size: 0.572998 MiB 00:05:40.764 element at address: 0x20000b200000 with size: 0.489807 MiB 00:05:40.764 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:40.764 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:40.764 element at address: 0x200027e00000 with size: 0.398315 MiB 00:05:40.764 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:40.764 list of standard malloc elements. size: 199.250671 MiB 00:05:40.764 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:40.764 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:40.764 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:40.764 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:40.764 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:40.764 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:40.764 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:40.764 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:40.764 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:40.764 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:05:40.764 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:40.764 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa946c0 
with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:40.764 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e280 with size: 0.000183 MiB 
00:05:40.765 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:40.765 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:40.765 list of memzone associated elements. 
size: 602.262573 MiB 00:05:40.765 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:40.765 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.765 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:40.765 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.765 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:40.765 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61810_0 00:05:40.765 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:40.765 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61810_0 00:05:40.765 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:40.765 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61810_0 00:05:40.765 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:40.765 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.765 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:40.765 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.765 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:40.765 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61810 00:05:40.765 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:40.765 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61810 00:05:40.765 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:40.765 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61810 00:05:40.765 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:40.765 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.765 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:40.765 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.765 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:40.765 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.765 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:40.765 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.765 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:40.765 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61810 00:05:40.765 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:40.765 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61810 00:05:40.765 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:40.765 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61810 00:05:40.765 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:40.765 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61810 00:05:40.765 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:40.765 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61810 00:05:40.765 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:40.765 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.765 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:40.765 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.765 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:40.765 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.765 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:40.765 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61810 00:05:40.765 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:40.765 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.765 element at address: 0x200027e66100 with size: 0.023743 MiB 00:05:40.765 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.765 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:40.765 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61810 00:05:40.765 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:05:40.765 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.765 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:40.765 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61810 00:05:40.765 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:40.765 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61810 00:05:40.765 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:05:40.765 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.765 19:34:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.765 19:34:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61810 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61810 ']' 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61810 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61810 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.765 killing process with pid 61810 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61810' 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61810 00:05:40.765 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61810 00:05:41.329 00:05:41.329 real 0m1.708s 00:05:41.329 user 0m1.833s 00:05:41.329 sys 0m0.442s 00:05:41.329 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.329 19:34:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.329 ************************************ 00:05:41.329 END TEST dpdk_mem_utility 00:05:41.329 ************************************ 00:05:41.329 19:34:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.329 19:34:06 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:41.329 19:34:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.329 19:34:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.329 19:34:06 -- common/autotest_common.sh@10 -- # set +x 00:05:41.329 ************************************ 00:05:41.329 START TEST event 00:05:41.329 ************************************ 00:05:41.329 19:34:06 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:41.329 * Looking for test storage... 
00:05:41.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:41.329 19:34:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:41.329 19:34:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:41.329 19:34:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.329 19:34:07 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:41.329 19:34:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.329 19:34:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.329 ************************************ 00:05:41.329 START TEST event_perf 00:05:41.329 ************************************ 00:05:41.329 19:34:07 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.329 Running I/O for 1 seconds...[2024-07-15 19:34:07.046140] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:05:41.329 [2024-07-15 19:34:07.046295] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61899 ] 00:05:41.585 [2024-07-15 19:34:07.185235] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.585 [2024-07-15 19:34:07.319487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.585 [2024-07-15 19:34:07.319722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.585 [2024-07-15 19:34:07.319810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.585 Running I/O for 1 seconds...[2024-07-15 19:34:07.320015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.955 00:05:42.955 lcore 0: 163486 00:05:42.955 lcore 1: 163486 00:05:42.955 lcore 2: 163487 00:05:42.955 lcore 3: 163486 00:05:42.955 done. 00:05:42.955 00:05:42.955 real 0m1.411s 00:05:42.955 user 0m4.220s 00:05:42.955 sys 0m0.065s 00:05:42.955 19:34:08 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.955 19:34:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.955 ************************************ 00:05:42.955 END TEST event_perf 00:05:42.955 ************************************ 00:05:42.955 19:34:08 event -- common/autotest_common.sh@1142 -- # return 0 00:05:42.955 19:34:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:42.955 19:34:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:42.955 19:34:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.955 19:34:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.955 ************************************ 00:05:42.955 START TEST event_reactor 00:05:42.955 ************************************ 00:05:42.955 19:34:08 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:42.955 [2024-07-15 19:34:08.507733] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:05:42.955 [2024-07-15 19:34:08.507829] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:05:42.956 [2024-07-15 19:34:08.641025] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.213 [2024-07-15 19:34:08.758121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.164 test_start 00:05:44.164 oneshot 00:05:44.164 tick 100 00:05:44.164 tick 100 00:05:44.164 tick 250 00:05:44.164 tick 100 00:05:44.164 tick 100 00:05:44.164 tick 250 00:05:44.164 tick 100 00:05:44.164 tick 500 00:05:44.164 tick 100 00:05:44.164 tick 100 00:05:44.164 tick 250 00:05:44.164 tick 100 00:05:44.164 tick 100 00:05:44.164 test_end 00:05:44.164 00:05:44.164 real 0m1.353s 00:05:44.164 user 0m1.189s 00:05:44.164 sys 0m0.056s 00:05:44.164 19:34:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.164 19:34:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:44.164 ************************************ 00:05:44.164 END TEST event_reactor 00:05:44.164 ************************************ 00:05:44.164 19:34:09 event -- common/autotest_common.sh@1142 -- # return 0 00:05:44.164 19:34:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.164 19:34:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:44.164 19:34:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.164 19:34:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.164 ************************************ 00:05:44.164 START TEST event_reactor_perf 00:05:44.164 ************************************ 00:05:44.164 19:34:09 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.164 [2024-07-15 19:34:09.915636] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:05:44.164 [2024-07-15 19:34:09.915736] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61973 ] 00:05:44.421 [2024-07-15 19:34:10.051031] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.421 [2024-07-15 19:34:10.170218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.793 test_start 00:05:45.793 test_end 00:05:45.793 Performance: 360016 events per second 00:05:45.793 00:05:45.793 real 0m1.363s 00:05:45.793 user 0m1.196s 00:05:45.793 sys 0m0.059s 00:05:45.793 19:34:11 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.793 19:34:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.793 ************************************ 00:05:45.793 END TEST event_reactor_perf 00:05:45.793 ************************************ 00:05:45.793 19:34:11 event -- common/autotest_common.sh@1142 -- # return 0 00:05:45.793 19:34:11 event -- event/event.sh@49 -- # uname -s 00:05:45.793 19:34:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:45.793 19:34:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:45.793 19:34:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.793 19:34:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.793 19:34:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.793 ************************************ 00:05:45.793 START TEST event_scheduler 00:05:45.793 ************************************ 00:05:45.793 19:34:11 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:45.793 * Looking for test storage... 00:05:45.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:45.793 19:34:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:45.793 19:34:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62035 00:05:45.793 19:34:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:45.793 19:34:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.793 19:34:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62035 00:05:45.793 19:34:11 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62035 ']' 00:05:45.793 19:34:11 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.793 19:34:11 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.793 19:34:11 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.793 19:34:11 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.793 19:34:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.793 [2024-07-15 19:34:11.449125] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:05:45.793 [2024-07-15 19:34:11.449279] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62035 ] 00:05:46.051 [2024-07-15 19:34:11.589971] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.051 [2024-07-15 19:34:11.705122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.051 [2024-07-15 19:34:11.705306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.051 [2024-07-15 19:34:11.705361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.051 [2024-07-15 19:34:11.705368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.988 19:34:12 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.988 19:34:12 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:46.988 19:34:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:46.988 19:34:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.988 19:34:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.988 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.988 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.988 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.988 POWER: Cannot set governor of lcore 0 to performance 00:05:46.988 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.988 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.988 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.988 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.988 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:46.988 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:46.988 POWER: Unable to set Power Management Environment for lcore 0 00:05:46.988 [2024-07-15 19:34:12.503613] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:46.988 [2024-07-15 19:34:12.503630] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:46.988 [2024-07-15 19:34:12.503639] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:46.988 [2024-07-15 19:34:12.503651] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:46.988 [2024-07-15 19:34:12.503659] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:46.988 [2024-07-15 19:34:12.503666] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:46.988 19:34:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.988 19:34:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:46.988 19:34:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 [2024-07-15 19:34:12.599896] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
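Condensed from the scheduler test's xtrace above: the app is started paused with --wait-for-rpc, the dynamic scheduler is selected over RPC, and only then is framework initialization released. The POWER errors are the dpdk_governor failing to open the cpufreq sysfs files inside the VM, so the dynamic scheduler comes up without frequency scaling; the RPCs still succeed. A hedged sketch of the sequence (binary, flags and RPC names taken from the trace; the sleep is a crude stand-in for waitforlisten):

scheduler_bin=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start paused so a scheduler can be chosen before the framework initializes.
"$scheduler_bin" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
sleep 1   # crude stand-in for waitforlisten on the default /var/tmp/spdk.sock

# Pick the dynamic scheduler, then let initialization proceed.
"$rpc" framework_set_scheduler dynamic
"$rpc" framework_start_init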
00:05:46.989 19:34:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:46.989 19:34:12 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.989 19:34:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 ************************************ 00:05:46.989 START TEST scheduler_create_thread 00:05:46.989 ************************************ 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 2 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 3 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 4 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 5 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 6 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 7 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 8 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 9 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 10 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.989 19:34:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.891 19:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.891 19:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:48.891 19:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:48.891 19:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.891 19:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.456 ************************************ 00:05:49.456 END TEST scheduler_create_thread 00:05:49.456 ************************************ 00:05:49.456 19:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.456 00:05:49.456 real 0m2.614s 00:05:49.456 user 0m0.020s 00:05:49.456 sys 0m0.005s 00:05:49.456 19:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.456 19:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:49.715 19:34:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:49.715 19:34:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62035 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62035 ']' 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62035 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62035 00:05:49.715 killing process with pid 62035 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62035' 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62035 00:05:49.715 19:34:15 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62035 00:05:49.973 [2024-07-15 19:34:15.704645] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
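[editor's note] The scheduler_create_thread run above exercises the scheduler_plugin RPC extension end to end. For reference, a minimal sketch of the same calls issued by hand through scripts/rpc.py — the method names, flags (-n name, -m cpumask, -a active percentage) and values are taken from the trace; the PYTHONPATH used to locate the plugin module is an assumption:

  # sketch only: plugin location is assumed, RPC methods/flags come from the trace above
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler   # assumed dir containing scheduler_plugin.py
  # an always-busy thread and an idle thread, both pinned to core 0
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # an unpinned thread created idle, bumped to 50% activity (thread_id=11 in the trace)
  id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
  # a busy thread created and immediately deleted (thread_id=12 in the trace)
  id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$id"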
00:05:50.233 00:05:50.233 real 0m4.640s 00:05:50.233 user 0m8.937s 00:05:50.233 sys 0m0.395s 00:05:50.233 19:34:15 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.233 19:34:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.233 ************************************ 00:05:50.233 END TEST event_scheduler 00:05:50.233 ************************************ 00:05:50.233 19:34:15 event -- common/autotest_common.sh@1142 -- # return 0 00:05:50.233 19:34:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.233 19:34:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.234 19:34:15 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.234 19:34:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.234 19:34:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.234 ************************************ 00:05:50.234 START TEST app_repeat 00:05:50.234 ************************************ 00:05:50.234 19:34:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:50.234 19:34:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.234 19:34:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.234 19:34:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:50.234 19:34:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.234 19:34:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:50.234 19:34:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:50.234 19:34:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:50.492 19:34:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62152 00:05:50.492 19:34:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.492 19:34:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.492 Process app_repeat pid: 62152 00:05:50.492 19:34:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62152' 00:05:50.492 19:34:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.492 spdk_app_start Round 0 00:05:50.492 19:34:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.492 19:34:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62152 /var/tmp/spdk-nbd.sock 00:05:50.492 19:34:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62152 ']' 00:05:50.492 19:34:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.492 19:34:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.492 19:34:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.492 19:34:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.492 19:34:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.492 [2024-07-15 19:34:16.048261] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
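[editor's note] The app_repeat trace that follows cycles the application through several start/stop rounds. A rough sketch of the outer loop as it can be reconstructed from the event.sh xtrace — waitforlisten and killprocess are autotest_common.sh helpers whose bodies are only summarized here, and the per-round verification is elided:

  # reconstructed from the trace; helper internals live in autotest_common.sh
  repeat_pid=62152                                   # pid reported by the harness above
  rpc_sock=/var/tmp/spdk-nbd.sock
  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" "$rpc_sock"        # block until the app's RPC socket answers
      # ...create Malloc0/Malloc1, attach them to /dev/nbd0 and /dev/nbd1, write and verify...
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" spdk_kill_instance SIGTERM
      sleep 3                                        # let the app restart its framework for the next round
  done
  waitforlisten "$repeat_pid" "$rpc_sock"            # Round 3: confirm the final restart came up
  killprocess "$repeat_pid"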
00:05:50.492 [2024-07-15 19:34:16.048389] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62152 ] 00:05:50.492 [2024-07-15 19:34:16.187318] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.750 [2024-07-15 19:34:16.308604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.750 [2024-07-15 19:34:16.308652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.334 19:34:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.334 19:34:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:51.334 19:34:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.592 Malloc0 00:05:51.592 19:34:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.850 Malloc1 00:05:51.850 19:34:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.850 19:34:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.108 /dev/nbd0 00:05:52.108 19:34:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.108 19:34:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:52.108 19:34:17 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.108 1+0 records in 00:05:52.108 1+0 records out 00:05:52.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456348 s, 9.0 MB/s 00:05:52.108 19:34:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.366 19:34:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.366 19:34:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.366 19:34:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.366 19:34:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.366 19:34:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.366 19:34:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.366 19:34:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.625 /dev/nbd1 00:05:52.625 19:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.625 19:34:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.625 1+0 records in 00:05:52.625 1+0 records out 00:05:52.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282119 s, 14.5 MB/s 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.625 19:34:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.625 19:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.625 19:34:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.625 19:34:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.625 19:34:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.625 
19:34:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.884 { 00:05:52.884 "bdev_name": "Malloc0", 00:05:52.884 "nbd_device": "/dev/nbd0" 00:05:52.884 }, 00:05:52.884 { 00:05:52.884 "bdev_name": "Malloc1", 00:05:52.884 "nbd_device": "/dev/nbd1" 00:05:52.884 } 00:05:52.884 ]' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.884 { 00:05:52.884 "bdev_name": "Malloc0", 00:05:52.884 "nbd_device": "/dev/nbd0" 00:05:52.884 }, 00:05:52.884 { 00:05:52.884 "bdev_name": "Malloc1", 00:05:52.884 "nbd_device": "/dev/nbd1" 00:05:52.884 } 00:05:52.884 ]' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.884 /dev/nbd1' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.884 /dev/nbd1' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.884 256+0 records in 00:05:52.884 256+0 records out 00:05:52.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650849 s, 161 MB/s 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.884 256+0 records in 00:05:52.884 256+0 records out 00:05:52.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261772 s, 40.1 MB/s 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.884 256+0 records in 00:05:52.884 256+0 records out 00:05:52.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314735 s, 33.3 MB/s 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.884 19:34:18 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.884 19:34:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.143 19:34:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.402 19:34:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.402 19:34:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.402 19:34:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.402 19:34:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.660 19:34:19 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.660 19:34:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.919 19:34:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.919 19:34:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.178 19:34:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.437 [2024-07-15 19:34:20.053728] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.438 [2024-07-15 19:34:20.164545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.438 [2024-07-15 19:34:20.164556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.697 [2024-07-15 19:34:20.224669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.697 [2024-07-15 19:34:20.224748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.229 19:34:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.229 spdk_app_start Round 1 00:05:57.229 19:34:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:57.229 19:34:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62152 /var/tmp/spdk-nbd.sock 00:05:57.229 19:34:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62152 ']' 00:05:57.229 19:34:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.229 19:34:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.229 19:34:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
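[editor's note] Round 0 above already shows the full nbd_rpc_data_verify pattern. A condensed, hand-runnable equivalent for a single device, with every command taken from the trace (the second device, Malloc1 on /dev/nbd1, is handled the same way):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  $rpc bdev_malloc_create 64 4096                    # 64 MB malloc bdev, 4096-byte blocks (comes back as Malloc0 in the trace)
  $rpc nbd_start_disk Malloc0 /dev/nbd0              # export the bdev as a kernel nbd block device
  dd if=/dev/urandom of="$tmp" bs=4096 count=256     # 1 MiB of random reference data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0                      # read back through the device and compare
  $rpc nbd_stop_disk /dev/nbd0
  rm "$tmp"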
00:05:57.229 19:34:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.229 19:34:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.487 19:34:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.487 19:34:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.487 19:34:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.745 Malloc0 00:05:57.745 19:34:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.003 Malloc1 00:05:58.003 19:34:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.003 19:34:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.260 /dev/nbd0 00:05:58.260 19:34:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.260 19:34:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.260 1+0 records in 00:05:58.260 1+0 records out 
00:05:58.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390762 s, 10.5 MB/s 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.260 19:34:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.260 19:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.260 19:34:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.260 19:34:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.519 /dev/nbd1 00:05:58.519 19:34:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.519 19:34:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.519 1+0 records in 00:05:58.519 1+0 records out 00:05:58.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441525 s, 9.3 MB/s 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.519 19:34:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.519 19:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.519 19:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.519 19:34:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.519 19:34:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.519 19:34:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.837 { 00:05:58.837 "bdev_name": "Malloc0", 00:05:58.837 "nbd_device": "/dev/nbd0" 00:05:58.837 }, 00:05:58.837 { 00:05:58.837 "bdev_name": "Malloc1", 00:05:58.837 "nbd_device": "/dev/nbd1" 00:05:58.837 } 
00:05:58.837 ]' 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.837 { 00:05:58.837 "bdev_name": "Malloc0", 00:05:58.837 "nbd_device": "/dev/nbd0" 00:05:58.837 }, 00:05:58.837 { 00:05:58.837 "bdev_name": "Malloc1", 00:05:58.837 "nbd_device": "/dev/nbd1" 00:05:58.837 } 00:05:58.837 ]' 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.837 /dev/nbd1' 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.837 /dev/nbd1' 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.837 256+0 records in 00:05:58.837 256+0 records out 00:05:58.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00761552 s, 138 MB/s 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.837 256+0 records in 00:05:58.837 256+0 records out 00:05:58.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252701 s, 41.5 MB/s 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.837 19:34:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.095 256+0 records in 00:05:59.095 256+0 records out 00:05:59.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283806 s, 36.9 MB/s 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.095 19:34:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.095 19:34:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.096 19:34:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.096 19:34:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.096 19:34:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.096 19:34:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.096 19:34:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.096 19:34:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.096 19:34:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.096 19:34:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.356 19:34:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.614 19:34:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.872 19:34:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.872 19:34:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.130 19:34:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.388 [2024-07-15 19:34:26.023242] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.388 [2024-07-15 19:34:26.140071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.388 [2024-07-15 19:34:26.140083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.646 [2024-07-15 19:34:26.198846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.646 [2024-07-15 19:34:26.198908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.197 spdk_app_start Round 2 00:06:03.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.197 19:34:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.197 19:34:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:03.197 19:34:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62152 /var/tmp/spdk-nbd.sock 00:06:03.197 19:34:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62152 ']' 00:06:03.197 19:34:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.197 19:34:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.197 19:34:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
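[editor's note] The repeated grep -q -w nbdX /proc/partitions and dd ... iflag=direct pairs in the trace come from the waitfornbd helper, which waits for the kernel to publish the nbd device and then proves it can serve a direct read. A simplified rendering follows; the poll interval and retry structure are assumptions, the individual commands mirror the trace:

  waitfornbd_sketch() {
      local nbd_name=$1 i size
      local tmpfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
      for ((i = 1; i <= 20; i++)); do                # poll until the kernel publishes the device
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                  # assumed poll interval; the real helper may differ
      done
      # prove the device serves I/O: a single 4 KiB O_DIRECT read must yield a non-empty file
      dd if=/dev/"$nbd_name" of="$tmpfile" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$tmpfile")
      rm -f "$tmpfile"
      [ "$size" != 0 ]
  }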
00:06:03.197 19:34:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.197 19:34:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.456 19:34:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.456 19:34:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:03.456 19:34:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.715 Malloc0 00:06:03.715 19:34:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.974 Malloc1 00:06:03.974 19:34:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.974 19:34:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.232 /dev/nbd0 00:06:04.232 19:34:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.232 19:34:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.232 19:34:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.232 19:34:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.232 19:34:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.232 19:34:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.232 19:34:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.232 19:34:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.233 19:34:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.233 19:34:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.233 19:34:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.233 1+0 records in 00:06:04.233 1+0 records out 
00:06:04.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282359 s, 14.5 MB/s 00:06:04.233 19:34:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.233 19:34:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.233 19:34:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.233 19:34:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.233 19:34:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.233 19:34:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.233 19:34:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.233 19:34:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.491 /dev/nbd1 00:06:04.747 19:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.747 19:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.747 19:34:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.748 1+0 records in 00:06:04.748 1+0 records out 00:06:04.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321844 s, 12.7 MB/s 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.748 19:34:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.748 19:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.748 19:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.748 19:34:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.748 19:34:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.748 19:34:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.004 { 00:06:05.004 "bdev_name": "Malloc0", 00:06:05.004 "nbd_device": "/dev/nbd0" 00:06:05.004 }, 00:06:05.004 { 00:06:05.004 "bdev_name": "Malloc1", 00:06:05.004 "nbd_device": "/dev/nbd1" 00:06:05.004 } 
00:06:05.004 ]' 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.004 { 00:06:05.004 "bdev_name": "Malloc0", 00:06:05.004 "nbd_device": "/dev/nbd0" 00:06:05.004 }, 00:06:05.004 { 00:06:05.004 "bdev_name": "Malloc1", 00:06:05.004 "nbd_device": "/dev/nbd1" 00:06:05.004 } 00:06:05.004 ]' 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.004 /dev/nbd1' 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.004 /dev/nbd1' 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.004 19:34:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.005 256+0 records in 00:06:05.005 256+0 records out 00:06:05.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00884528 s, 119 MB/s 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.005 256+0 records in 00:06:05.005 256+0 records out 00:06:05.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260205 s, 40.3 MB/s 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.005 256+0 records in 00:06:05.005 256+0 records out 00:06:05.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02999 s, 35.0 MB/s 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.005 19:34:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.262 19:34:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.537 19:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.794 19:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.794 19:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.794 19:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.051 19:34:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.051 19:34:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.322 19:34:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.578 [2024-07-15 19:34:32.149213] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.578 [2024-07-15 19:34:32.278827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.578 [2024-07-15 19:34:32.278839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.578 [2024-07-15 19:34:32.337396] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.578 [2024-07-15 19:34:32.337489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.856 19:34:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62152 /var/tmp/spdk-nbd.sock 00:06:09.856 19:34:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62152 ']' 00:06:09.856 19:34:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.856 19:34:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.856 19:34:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
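[editor's note] The teardown that follows goes through the killprocess helper: confirm the pid is set and still alive, make sure it is not a sudo wrapper, then kill and reap it. An approximate reconstruction from the xtrace (the real helper's sudo and error handling is richer than shown):

  killprocess_sketch() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 0                     # nothing to do, process already exited
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      if [ "$process_name" != sudo ]; then           # never signal a sudo wrapper directly
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid"                                # reap it so the next test starts clean
      fi
  }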
00:06:09.856 19:34:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.856 19:34:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.856 19:34:35 event.app_repeat -- event/event.sh@39 -- # killprocess 62152 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62152 ']' 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62152 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62152 00:06:09.856 killing process with pid 62152 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62152' 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62152 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62152 00:06:09.856 spdk_app_start is called in Round 0. 00:06:09.856 Shutdown signal received, stop current app iteration 00:06:09.856 Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 reinitialization... 00:06:09.856 spdk_app_start is called in Round 1. 00:06:09.856 Shutdown signal received, stop current app iteration 00:06:09.856 Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 reinitialization... 00:06:09.856 spdk_app_start is called in Round 2. 00:06:09.856 Shutdown signal received, stop current app iteration 00:06:09.856 Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 reinitialization... 00:06:09.856 spdk_app_start is called in Round 3. 
00:06:09.856 Shutdown signal received, stop current app iteration 00:06:09.856 19:34:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.856 19:34:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.856 00:06:09.856 real 0m19.476s 00:06:09.856 user 0m43.488s 00:06:09.856 sys 0m3.248s 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.856 19:34:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.856 ************************************ 00:06:09.856 END TEST app_repeat 00:06:09.856 ************************************ 00:06:09.856 19:34:35 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.856 19:34:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.856 19:34:35 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:09.856 19:34:35 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.856 19:34:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.856 19:34:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.856 ************************************ 00:06:09.856 START TEST cpu_locks 00:06:09.856 ************************************ 00:06:09.856 19:34:35 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:09.856 * Looking for test storage... 00:06:09.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:09.856 19:34:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.856 19:34:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.856 19:34:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.856 19:34:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.856 19:34:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.856 19:34:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.856 19:34:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.856 ************************************ 00:06:09.856 START TEST default_locks 00:06:09.856 ************************************ 00:06:09.856 19:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:09.856 19:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62778 00:06:09.856 19:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62778 00:06:09.856 19:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62778 ']' 00:06:09.856 19:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.856 19:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.114 19:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.114 19:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:10.114 19:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.114 19:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.114 [2024-07-15 19:34:35.696787] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:10.114 [2024-07-15 19:34:35.696893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62778 ] 00:06:10.114 [2024-07-15 19:34:35.832389] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.372 [2024-07-15 19:34:35.963404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.938 19:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.938 19:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:10.938 19:34:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62778 00:06:10.938 19:34:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.938 19:34:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62778 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62778 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62778 ']' 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62778 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62778 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.504 killing process with pid 62778 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62778' 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62778 00:06:11.504 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62778 00:06:12.070 19:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62778 00:06:12.070 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:12.070 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62778 00:06:12.070 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:12.070 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.070 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:12.070 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62778 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62778 ']' 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.071 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62778) - No such process 00:06:12.071 ERROR: process (pid: 62778) is no longer running 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.071 00:06:12.071 real 0m1.962s 00:06:12.071 user 0m2.151s 00:06:12.071 sys 0m0.547s 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.071 ************************************ 00:06:12.071 END TEST default_locks 00:06:12.071 19:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.071 ************************************ 00:06:12.071 19:34:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.071 19:34:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:12.071 19:34:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.071 19:34:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.071 19:34:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.071 ************************************ 00:06:12.071 START TEST default_locks_via_rpc 00:06:12.071 ************************************ 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62842 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62842 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62842 ']' 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.071 19:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.071 [2024-07-15 19:34:37.706000] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:12.071 [2024-07-15 19:34:37.706124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62842 ] 00:06:12.071 [2024-07-15 19:34:37.844680] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.329 [2024-07-15 19:34:38.003608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62842 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62842 00:06:13.264 19:34:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.521 19:34:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62842 00:06:13.521 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62842 ']' 00:06:13.521 
19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62842 00:06:13.521 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:13.521 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.521 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62842 00:06:13.521 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.522 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.522 killing process with pid 62842 00:06:13.522 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62842' 00:06:13.522 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62842 00:06:13.522 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62842 00:06:14.088 00:06:14.088 real 0m2.173s 00:06:14.088 user 0m2.195s 00:06:14.088 sys 0m0.687s 00:06:14.088 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.088 19:34:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.088 ************************************ 00:06:14.088 END TEST default_locks_via_rpc 00:06:14.088 ************************************ 00:06:14.088 19:34:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:14.088 19:34:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:14.088 19:34:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.088 19:34:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.088 19:34:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.346 ************************************ 00:06:14.346 START TEST non_locking_app_on_locked_coremask 00:06:14.346 ************************************ 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62911 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62911 /var/tmp/spdk.sock 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62911 ']' 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
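The default_locks and default_locks_via_rpc runs above both reduce to the same check: an spdk_tgt started with a core mask holds a per-core lock file, visible through lslocks on its pid, and the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs drop and re-take those files at runtime. A minimal sketch of that check, assuming the same built binaries and prepared environment the harness uses (the sleep is a crude stand-in for the harness's waitforlisten helper):
  SPDK=/home/vagrant/spdk_repo/spdk

  # Start a target pinned to core 0; it should take /var/tmp/spdk_cpu_lock_000.
  "$SPDK"/build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  sleep 2                                   # stand-in for waitforlisten

  # The lock shows up in lslocks for that pid.
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock

  # Release and re-acquire the locks over JSON-RPC without restarting the target
  # (default RPC socket /var/tmp/spdk.sock).
  "$SPDK"/scripts/rpc.py framework_disable_cpumask_locks
  "$SPDK"/scripts/rpc.py framework_enable_cpumask_locks

  kill "$tgt_pid"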
00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.346 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.346 [2024-07-15 19:34:39.944339] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:14.346 [2024-07-15 19:34:39.944442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62911 ] 00:06:14.346 [2024-07-15 19:34:40.079657] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.604 [2024-07-15 19:34:40.245897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62945 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62945 /var/tmp/spdk2.sock 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62945 ']' 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.533 19:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.533 [2024-07-15 19:34:41.051347] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:15.533 [2024-07-15 19:34:41.051445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62945 ] 00:06:15.533 [2024-07-15 19:34:41.194798] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
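non_locking_app_on_locked_coremask exercises what the two NOTICE lines above show: while one target holds the core-0 lock, a second target on the same mask can still come up as long as it opts out of locking and uses its own RPC socket. A rough reconstruction of that pair of launches, with the flags taken from the log and the timing simplified:
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # First target claims core 0 and /var/tmp/spdk_cpu_lock_000.
  "$SPDK_BIN" -m 0x1 &

  # Second target shares core 0 but skips the lock ("CPU core locks deactivated")
  # and listens on a second socket, so both can run side by side.
  "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &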
00:06:15.533 [2024-07-15 19:34:41.194887] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.791 [2024-07-15 19:34:41.456887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.385 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.385 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:16.385 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62911 00:06:16.385 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62911 00:06:16.385 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62911 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62911 ']' 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62911 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62911 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62911' 00:06:16.950 killing process with pid 62911 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62911 00:06:16.950 19:34:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62911 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62945 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62945 ']' 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62945 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62945 00:06:17.885 killing process with pid 62945 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62945' 00:06:17.885 19:34:43 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62945 00:06:17.885 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62945 00:06:18.450 ************************************ 00:06:18.450 END TEST non_locking_app_on_locked_coremask 00:06:18.450 ************************************ 00:06:18.450 00:06:18.450 real 0m4.117s 00:06:18.450 user 0m4.441s 00:06:18.450 sys 0m1.213s 00:06:18.450 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.450 19:34:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.450 19:34:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:18.450 19:34:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.450 19:34:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.450 19:34:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.450 19:34:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.450 ************************************ 00:06:18.450 START TEST locking_app_on_unlocked_coremask 00:06:18.450 ************************************ 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63024 00:06:18.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63024 /var/tmp/spdk.sock 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63024 ']' 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.450 19:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.450 [2024-07-15 19:34:44.115697] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:18.450 [2024-07-15 19:34:44.115790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63024 ] 00:06:18.708 [2024-07-15 19:34:44.256942] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.708 [2024-07-15 19:34:44.257008] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.708 [2024-07-15 19:34:44.378996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63054 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63054 /var/tmp/spdk2.sock 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63054 ']' 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.644 19:34:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.644 [2024-07-15 19:34:45.249397] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:19.644 [2024-07-15 19:34:45.249493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63054 ] 00:06:19.644 [2024-07-15 19:34:45.394946] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.902 [2024-07-15 19:34:45.596958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.468 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.468 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:20.469 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63054 00:06:20.469 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63054 00:06:20.469 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63024 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63024 ']' 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63024 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63024 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63024' 00:06:21.400 killing process with pid 63024 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63024 00:06:21.400 19:34:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63024 00:06:22.335 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63054 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63054 ']' 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63054 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63054 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.336 killing process with pid 63054 00:06:22.336 19:34:47 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63054' 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63054 00:06:22.336 19:34:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63054 00:06:22.594 00:06:22.594 real 0m4.177s 00:06:22.594 user 0m4.655s 00:06:22.594 sys 0m1.171s 00:06:22.594 ************************************ 00:06:22.594 END TEST locking_app_on_unlocked_coremask 00:06:22.594 ************************************ 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.594 19:34:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:22.594 19:34:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:22.594 19:34:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.594 19:34:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.594 19:34:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.594 ************************************ 00:06:22.594 START TEST locking_app_on_locked_coremask 00:06:22.594 ************************************ 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63131 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63131 /var/tmp/spdk.sock 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63131 ']' 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.594 19:34:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.594 [2024-07-15 19:34:48.342692] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:22.594 [2024-07-15 19:34:48.342808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63131 ] 00:06:22.852 [2024-07-15 19:34:48.484464] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.852 [2024-07-15 19:34:48.625831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63159 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63159 /var/tmp/spdk2.sock 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63159 /var/tmp/spdk2.sock 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63159 /var/tmp/spdk2.sock 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63159 ']' 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.785 19:34:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.785 [2024-07-15 19:34:49.397792] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:23.785 [2024-07-15 19:34:49.397882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63159 ] 00:06:23.785 [2024-07-15 19:34:49.539916] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63131 has claimed it. 00:06:23.785 [2024-07-15 19:34:49.540007] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.350 ERROR: process (pid: 63159) is no longer running 00:06:24.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63159) - No such process 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63131 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63131 00:06:24.350 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63131 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63131 ']' 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63131 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63131 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.914 killing process with pid 63131 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63131' 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63131 00:06:24.914 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63131 00:06:25.171 00:06:25.171 real 0m2.585s 00:06:25.171 user 0m2.979s 00:06:25.171 sys 0m0.612s 00:06:25.171 ************************************ 00:06:25.171 END TEST locking_app_on_locked_coremask 00:06:25.171 ************************************ 00:06:25.171 19:34:50 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.171 19:34:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.171 19:34:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:25.171 19:34:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.171 19:34:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.171 19:34:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.171 19:34:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.171 ************************************ 00:06:25.171 START TEST locking_overlapped_coremask 00:06:25.171 ************************************ 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63210 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63210 /var/tmp/spdk.sock 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63210 ']' 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.171 19:34:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.427 [2024-07-15 19:34:50.986219] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:25.427 [2024-07-15 19:34:50.986381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63210 ] 00:06:25.427 [2024-07-15 19:34:51.127247] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.684 [2024-07-15 19:34:51.248179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.684 [2024-07-15 19:34:51.248277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.684 [2024-07-15 19:34:51.248283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63240 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63240 /var/tmp/spdk2.sock 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63240 /var/tmp/spdk2.sock 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63240 /var/tmp/spdk2.sock 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63240 ']' 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.249 19:34:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.508 [2024-07-15 19:34:52.040388] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:26.508 [2024-07-15 19:34:52.040947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63240 ] 00:06:26.508 [2024-07-15 19:34:52.183821] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63210 has claimed it. 00:06:26.508 [2024-07-15 19:34:52.183897] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.105 ERROR: process (pid: 63240) is no longer running 00:06:27.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63240) - No such process 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.105 19:34:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63210 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63210 ']' 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63210 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63210 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.106 killing process with pid 63210 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63210' 00:06:27.106 19:34:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63210 00:06:27.106 19:34:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63210 00:06:27.688 00:06:27.688 real 0m2.265s 00:06:27.688 user 0m6.255s 00:06:27.688 sys 0m0.441s 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.688 ************************************ 00:06:27.688 END TEST locking_overlapped_coremask 00:06:27.688 ************************************ 00:06:27.688 19:34:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.688 19:34:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.688 19:34:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.688 19:34:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.688 19:34:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.688 ************************************ 00:06:27.688 START TEST locking_overlapped_coremask_via_rpc 00:06:27.688 ************************************ 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63292 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63292 /var/tmp/spdk.sock 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63292 ']' 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.688 19:34:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.688 [2024-07-15 19:34:53.315179] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:27.688 [2024-07-15 19:34:53.315929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63292 ] 00:06:27.688 [2024-07-15 19:34:53.459645] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
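locking_overlapped_coremask, which wrapped up a few entries above, is the negative counterpart: the first target takes cores 0-2 (-m 0x7), so a second target asking for an overlapping mask (-m 0x1c, cores 2-4) without disabling locking is expected to abort with the "Cannot create lock on core 2, probably process ... has claimed it" error seen in the log. A rough sketch of that expected failure, using the same masks as the run above:
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$SPDK_BIN" -m 0x7 &          # claims spdk_cpu_lock_000..002
  sleep 2                       # stand-in for waitforlisten

  # Overlaps on core 2 and does not pass --disable-cpumask-locks,
  # so it should exit with the core-claim error almost immediately.
  if ! "$SPDK_BIN" -m 0x1c -r /var/tmp/spdk2.sock; then
      echo "second target refused to start, as expected"
  fi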
00:06:27.688 [2024-07-15 19:34:53.459874] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.948 [2024-07-15 19:34:53.582232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.948 [2024-07-15 19:34:53.582279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.948 [2024-07-15 19:34:53.582281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63316 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63316 /var/tmp/spdk2.sock 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63316 ']' 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.514 19:34:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.772 [2024-07-15 19:34:54.311632] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:28.772 [2024-07-15 19:34:54.311767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63316 ] 00:06:28.772 [2024-07-15 19:34:54.457596] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.772 [2024-07-15 19:34:54.457643] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.030 [2024-07-15 19:34:54.691341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.030 [2024-07-15 19:34:54.691466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.030 [2024-07-15 19:34:54.691466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.596 [2024-07-15 19:34:55.309279] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63292 has claimed it. 
00:06:29.596 2024/07/15 19:34:55 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:29.596 request: 00:06:29.596 { 00:06:29.596 "method": "framework_enable_cpumask_locks", 00:06:29.596 "params": {} 00:06:29.596 } 00:06:29.596 Got JSON-RPC error response 00:06:29.596 GoRPCClient: error on JSON-RPC call 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63292 /var/tmp/spdk.sock 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63292 ']' 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.596 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63316 /var/tmp/spdk2.sock 00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63316 ']' 00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
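The failure above is the intent of this test case: the first spdk_tgt was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), both with --disable-cpumask-locks so they could boot while sharing core 2. Enabling the locks over RPC then succeeds for the first target, which claims /var/tmp/spdk_cpu_lock_000 through _002, and fails for the second with Code=-32603 because core 2 is already claimed. A minimal sketch of the same sequence, assuming SPDK's scripts/rpc.py client in place of the suite's rpc_cmd wrapper (the masks, socket path and lock-file names are taken from this log):

  # first target owns cores 0,1,2; second wants cores 2,3,4, so core 2 overlaps
  build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  scripts/rpc.py framework_enable_cpumask_locks                         # claims /var/tmp/spdk_cpu_lock_000..002
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 already claimed (Code=-32603)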
00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.855 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.114 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.114 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:30.114 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.114 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.114 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.114 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.114 00:06:30.114 real 0m2.619s 00:06:30.114 user 0m1.334s 00:06:30.114 sys 0m0.216s 00:06:30.114 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.114 ************************************ 00:06:30.114 END TEST locking_overlapped_coremask_via_rpc 00:06:30.114 19:34:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.114 ************************************ 00:06:30.114 19:34:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:30.114 19:34:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.114 19:34:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63292 ]] 00:06:30.114 19:34:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63292 00:06:30.114 19:34:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63292 ']' 00:06:30.114 19:34:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63292 00:06:30.114 19:34:55 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:30.114 19:34:55 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.114 19:34:55 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63292 00:06:30.396 19:34:55 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.396 19:34:55 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.396 killing process with pid 63292 00:06:30.396 19:34:55 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63292' 00:06:30.396 19:34:55 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63292 00:06:30.396 19:34:55 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63292 00:06:30.656 19:34:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63316 ]] 00:06:30.656 19:34:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63316 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63316 ']' 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63316 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:30.656 19:34:56 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63316 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63316' 00:06:30.656 killing process with pid 63316 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63316 00:06:30.656 19:34:56 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63316 00:06:31.222 19:34:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.222 19:34:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:31.222 19:34:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63292 ]] 00:06:31.222 19:34:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63292 00:06:31.222 19:34:56 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63292 ']' 00:06:31.222 19:34:56 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63292 00:06:31.222 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63292) - No such process 00:06:31.222 Process with pid 63292 is not found 00:06:31.222 19:34:56 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63292 is not found' 00:06:31.223 19:34:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63316 ]] 00:06:31.223 19:34:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63316 00:06:31.223 19:34:56 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63316 ']' 00:06:31.223 19:34:56 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63316 00:06:31.223 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63316) - No such process 00:06:31.223 Process with pid 63316 is not found 00:06:31.223 19:34:56 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63316 is not found' 00:06:31.223 19:34:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.223 00:06:31.223 real 0m21.186s 00:06:31.223 user 0m36.213s 00:06:31.223 sys 0m5.721s 00:06:31.223 19:34:56 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.223 19:34:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.223 ************************************ 00:06:31.223 END TEST cpu_locks 00:06:31.223 ************************************ 00:06:31.223 19:34:56 event -- common/autotest_common.sh@1142 -- # return 0 00:06:31.223 00:06:31.223 real 0m49.840s 00:06:31.223 user 1m35.369s 00:06:31.223 sys 0m9.808s 00:06:31.223 19:34:56 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.223 19:34:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.223 ************************************ 00:06:31.223 END TEST event 00:06:31.223 ************************************ 00:06:31.223 19:34:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.223 19:34:56 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.223 19:34:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.223 19:34:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.223 19:34:56 -- common/autotest_common.sh@10 -- # set +x 00:06:31.223 ************************************ 00:06:31.223 START TEST thread 
00:06:31.223 ************************************ 00:06:31.223 19:34:56 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.223 * Looking for test storage... 00:06:31.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:31.223 19:34:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.223 19:34:56 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:31.223 19:34:56 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.223 19:34:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.223 ************************************ 00:06:31.223 START TEST thread_poller_perf 00:06:31.223 ************************************ 00:06:31.223 19:34:56 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.223 [2024-07-15 19:34:56.928647] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:31.223 [2024-07-15 19:34:56.928747] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63468 ] 00:06:31.481 [2024-07-15 19:34:57.063762] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.481 [2024-07-15 19:34:57.173543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.481 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:32.856 ====================================== 00:06:32.856 busy:2206736593 (cyc) 00:06:32.856 total_run_count: 314000 00:06:32.856 tsc_hz: 2200000000 (cyc) 00:06:32.856 ====================================== 00:06:32.856 poller_cost: 7027 (cyc), 3194 (nsec) 00:06:32.856 ************************************ 00:06:32.856 END TEST thread_poller_perf 00:06:32.856 ************************************ 00:06:32.856 00:06:32.856 real 0m1.352s 00:06:32.856 user 0m1.194s 00:06:32.856 sys 0m0.051s 00:06:32.856 19:34:58 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.856 19:34:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.856 19:34:58 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:32.856 19:34:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.856 19:34:58 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:32.856 19:34:58 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.856 19:34:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.856 ************************************ 00:06:32.856 START TEST thread_poller_perf 00:06:32.856 ************************************ 00:06:32.856 19:34:58 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.856 [2024-07-15 19:34:58.330556] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:32.856 [2024-07-15 19:34:58.330667] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63503 ] 00:06:32.857 [2024-07-15 19:34:58.470792] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.857 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:32.857 [2024-07-15 19:34:58.580855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.259 ====================================== 00:06:34.259 busy:2202045481 (cyc) 00:06:34.259 total_run_count: 4223000 00:06:34.259 tsc_hz: 2200000000 (cyc) 00:06:34.259 ====================================== 00:06:34.259 poller_cost: 521 (cyc), 236 (nsec) 00:06:34.259 00:06:34.259 real 0m1.356s 00:06:34.259 user 0m1.190s 00:06:34.259 sys 0m0.057s 00:06:34.259 19:34:59 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.259 19:34:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.259 ************************************ 00:06:34.259 END TEST thread_poller_perf 00:06:34.259 ************************************ 00:06:34.259 19:34:59 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:34.259 19:34:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:34.259 00:06:34.259 real 0m2.896s 00:06:34.259 user 0m2.454s 00:06:34.259 sys 0m0.222s 00:06:34.259 19:34:59 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.259 19:34:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.259 ************************************ 00:06:34.259 END TEST thread 00:06:34.259 ************************************ 00:06:34.259 19:34:59 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.259 19:34:59 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:34.259 19:34:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.259 19:34:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.259 19:34:59 -- common/autotest_common.sh@10 -- # set +x 00:06:34.259 ************************************ 00:06:34.259 START TEST accel 00:06:34.259 ************************************ 00:06:34.259 19:34:59 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:34.259 * Looking for test storage... 
00:06:34.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:34.259 19:34:59 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:34.259 19:34:59 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:34.259 19:34:59 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.259 19:34:59 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63574 00:06:34.259 19:34:59 accel -- accel/accel.sh@63 -- # waitforlisten 63574 00:06:34.259 19:34:59 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:34.259 19:34:59 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:34.259 19:34:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.259 19:34:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.259 19:34:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.259 19:34:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.259 19:34:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.259 19:34:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:34.259 19:34:59 accel -- accel/accel.sh@41 -- # jq -r . 00:06:34.259 19:34:59 accel -- common/autotest_common.sh@829 -- # '[' -z 63574 ']' 00:06:34.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.259 19:34:59 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.259 19:34:59 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.259 19:34:59 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.259 19:34:59 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.259 19:34:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.259 [2024-07-15 19:34:59.907668] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:34.259 [2024-07-15 19:34:59.907771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63574 ] 00:06:34.518 [2024-07-15 19:35:00.047896] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.518 [2024-07-15 19:35:00.170533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@862 -- # return 0 00:06:35.453 19:35:00 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:35.453 19:35:00 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:35.453 19:35:00 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:35.453 19:35:00 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:35.453 19:35:00 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:35.453 19:35:00 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.453 19:35:00 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 
19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # IFS== 00:06:35.453 19:35:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:35.453 19:35:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:35.453 19:35:00 accel -- accel/accel.sh@75 -- # killprocess 63574 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@948 -- # '[' -z 63574 ']' 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@952 -- # kill -0 63574 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@953 -- # uname 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63574 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.453 killing process with pid 63574 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63574' 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@967 -- # kill 63574 00:06:35.453 19:35:00 accel -- common/autotest_common.sh@972 -- # wait 63574 00:06:35.711 19:35:01 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:35.712 19:35:01 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:35.712 19:35:01 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:35.712 19:35:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.712 19:35:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.712 19:35:01 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:35.712 19:35:01 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
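The long run of expected_opcs assignments above is accel.sh building its opcode-to-module map from the target's accel_get_opc_assignments RPC; with no hardware accel modules loaded in this run, every opcode resolves to the software module. Roughly, and assuming plain scripts/rpc.py can stand in for the suite's rpc_cmd helper:

  declare -A expected_opcs
  exp_opcs=($(scripts/rpc.py accel_get_opc_assignments |
              jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  for opc_opt in "${exp_opcs[@]}"; do
      IFS== read -r opc module <<< "$opc_opt"   # split "copy=software" into opc / module
      expected_opcs["$opc"]=$module             # every entry is "software" in this run
  done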
00:06:35.712 19:35:01 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.712 19:35:01 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:35.712 19:35:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.712 19:35:01 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:35.712 19:35:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:35.712 19:35:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.712 19:35:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.712 ************************************ 00:06:35.712 START TEST accel_missing_filename 00:06:35.712 ************************************ 00:06:35.712 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:35.712 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:35.712 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:35.712 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:35.712 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.712 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:35.712 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.712 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:35.712 19:35:01 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:35.969 [2024-07-15 19:35:01.503663] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:35.969 [2024-07-15 19:35:01.504274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63648 ] 00:06:35.969 [2024-07-15 19:35:01.643103] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.227 [2024-07-15 19:35:01.756985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.227 [2024-07-15 19:35:01.813166] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.227 [2024-07-15 19:35:01.892534] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:36.227 A filename is required. 
00:06:36.227 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:36.227 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.227 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:36.227 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:36.227 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:36.227 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.227 00:06:36.227 real 0m0.519s 00:06:36.227 user 0m0.342s 00:06:36.227 sys 0m0.120s 00:06:36.227 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.227 19:35:01 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:36.227 ************************************ 00:06:36.227 END TEST accel_missing_filename 00:06:36.227 ************************************ 00:06:36.486 19:35:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.486 19:35:02 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:36.486 19:35:02 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:36.486 19:35:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.486 19:35:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.486 ************************************ 00:06:36.486 START TEST accel_compress_verify 00:06:36.486 ************************************ 00:06:36.486 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:36.486 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:36.486 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:36.486 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:36.486 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.486 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:36.486 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:36.486 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:36.486 19:35:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:36.486 19:35:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:36.486 19:35:02 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.486 19:35:02 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.486 19:35:02 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.486 19:35:02 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.486 19:35:02 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.486 19:35:02 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:36.486 19:35:02 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:36.486 [2024-07-15 19:35:02.074837] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:36.486 [2024-07-15 19:35:02.074949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63672 ] 00:06:36.486 [2024-07-15 19:35:02.213322] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.745 [2024-07-15 19:35:02.337555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.745 [2024-07-15 19:35:02.397784] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.745 [2024-07-15 19:35:02.478114] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:37.004 00:06:37.004 Compression does not support the verify option, aborting. 00:06:37.004 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:37.004 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.004 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:37.004 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:37.004 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:37.004 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.004 00:06:37.004 real 0m0.513s 00:06:37.004 user 0m0.334s 00:06:37.004 sys 0m0.122s 00:06:37.004 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.004 19:35:02 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:37.004 ************************************ 00:06:37.004 END TEST accel_compress_verify 00:06:37.004 ************************************ 00:06:37.004 19:35:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.004 19:35:02 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:37.004 19:35:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:37.004 19:35:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.004 19:35:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.004 ************************************ 00:06:37.004 START TEST accel_wrong_workload 00:06:37.004 ************************************ 00:06:37.004 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:37.004 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:37.004 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:37.004 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.004 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.004 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.004 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.004 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
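Taken together, the two compress cases above pin down accel_perf's compress handling: -w compress requires an input file via -l (otherwise "A filename is required"), and it rejects -y because compressed output cannot be verified byte-for-byte against the input. An invocation that satisfies both constraints, assembled from the flags and the bib path used above rather than copied from this run:

  build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib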
00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:37.004 19:35:02 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:37.004 Unsupported workload type: foobar 00:06:37.004 [2024-07-15 19:35:02.638386] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:37.004 accel_perf options: 00:06:37.005 [-h help message] 00:06:37.005 [-q queue depth per core] 00:06:37.005 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.005 [-T number of threads per core 00:06:37.005 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.005 [-t time in seconds] 00:06:37.005 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.005 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:37.005 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.005 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.005 [-S for crc32c workload, use this seed value (default 0) 00:06:37.005 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.005 [-f for fill workload, use this BYTE value (default 255) 00:06:37.005 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.005 [-y verify result if this switch is on] 00:06:37.005 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.005 Can be used to spread operations across a wider range of memory. 
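The usage text above, printed while rejecting '-w foobar', lists every option accel_perf validates; the workload table is why foobar is refused, and the same parser rejects '-x -1' in the next test. For contrast, an invocation that passes validation, shaped like the crc32c run further below (the -q and -o values are added from the options listed above, not taken from the logged command):

  build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 64 -o 4096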
00:06:37.005 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:37.005 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.005 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.005 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.005 00:06:37.005 real 0m0.029s 00:06:37.005 user 0m0.017s 00:06:37.005 sys 0m0.012s 00:06:37.005 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.005 19:35:02 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:37.005 ************************************ 00:06:37.005 END TEST accel_wrong_workload 00:06:37.005 ************************************ 00:06:37.005 19:35:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.005 19:35:02 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.005 19:35:02 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:37.005 19:35:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.005 19:35:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.005 ************************************ 00:06:37.005 START TEST accel_negative_buffers 00:06:37.005 ************************************ 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:37.005 19:35:02 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:37.005 -x option must be non-negative. 
00:06:37.005 [2024-07-15 19:35:02.719054] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:37.005 accel_perf options: 00:06:37.005 [-h help message] 00:06:37.005 [-q queue depth per core] 00:06:37.005 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.005 [-T number of threads per core 00:06:37.005 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.005 [-t time in seconds] 00:06:37.005 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.005 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:37.005 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.005 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.005 [-S for crc32c workload, use this seed value (default 0) 00:06:37.005 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.005 [-f for fill workload, use this BYTE value (default 255) 00:06:37.005 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.005 [-y verify result if this switch is on] 00:06:37.005 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.005 Can be used to spread operations across a wider range of memory. 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.005 00:06:37.005 real 0m0.027s 00:06:37.005 user 0m0.016s 00:06:37.005 sys 0m0.010s 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.005 19:35:02 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:37.005 ************************************ 00:06:37.005 END TEST accel_negative_buffers 00:06:37.005 ************************************ 00:06:37.005 19:35:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.005 19:35:02 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:37.005 19:35:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:37.005 19:35:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.005 19:35:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.005 ************************************ 00:06:37.005 START TEST accel_crc32c 00:06:37.005 ************************************ 00:06:37.005 19:35:02 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:37.005 19:35:02 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:37.263 [2024-07-15 19:35:02.795006] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:37.263 [2024-07-15 19:35:02.795119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63731 ] 00:06:37.263 [2024-07-15 19:35:02.933045] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.522 [2024-07-15 19:35:03.065669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.522 19:35:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:38.895 19:35:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.895 00:06:38.895 real 0m1.525s 00:06:38.895 user 0m1.314s 00:06:38.895 sys 0m0.117s 00:06:38.895 19:35:04 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.895 19:35:04 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:38.895 ************************************ 00:06:38.895 END TEST accel_crc32c 00:06:38.895 ************************************ 00:06:38.895 19:35:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.895 19:35:04 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:38.895 19:35:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:38.895 19:35:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.895 19:35:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.895 ************************************ 00:06:38.896 START TEST accel_crc32c_C2 00:06:38.896 ************************************ 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:38.896 19:35:04 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:38.896 [2024-07-15 19:35:04.373791] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:38.896 [2024-07-15 19:35:04.373877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63765 ] 00:06:38.896 [2024-07-15 19:35:04.505154] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.896 [2024-07-15 19:35:04.607656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.896 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.154 19:35:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.095 19:35:05 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.095 00:06:40.095 real 0m1.489s 00:06:40.095 user 0m1.281s 00:06:40.095 sys 0m0.115s 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.095 19:35:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:40.095 ************************************ 00:06:40.095 END TEST accel_crc32c_C2 00:06:40.095 ************************************ 00:06:40.353 19:35:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.353 19:35:05 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:40.353 19:35:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:40.353 19:35:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.353 19:35:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.353 ************************************ 00:06:40.353 START TEST accel_copy 00:06:40.353 ************************************ 00:06:40.353 19:35:05 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.353 19:35:05 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:40.353 19:35:05 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:40.353 [2024-07-15 19:35:05.917563] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:40.353 [2024-07-15 19:35:05.917657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63800 ] 00:06:40.353 [2024-07-15 19:35:06.051368] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.611 [2024-07-15 19:35:06.177975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.611 
19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.611 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.612 19:35:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.992 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.993 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.993 19:35:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:41.993 19:35:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:41.993 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:41.993 19:35:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:41.993 19:35:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.993 19:35:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:41.993 19:35:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.993 00:06:41.993 real 0m1.528s 00:06:41.993 user 0m1.305s 00:06:41.993 sys 0m0.124s 00:06:41.993 19:35:07 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.993 ************************************ 00:06:41.993 END TEST accel_copy 00:06:41.993 ************************************ 00:06:41.993 19:35:07 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:41.993 19:35:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.993 19:35:07 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.993 19:35:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:41.993 19:35:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.993 19:35:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.993 ************************************ 00:06:41.993 START TEST accel_fill 00:06:41.993 ************************************ 00:06:41.993 19:35:07 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.993 19:35:07 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:41.993 19:35:07 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:41.993 [2024-07-15 19:35:07.505502] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:41.993 [2024-07-15 19:35:07.505647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63834 ] 00:06:41.993 [2024-07-15 19:35:07.646207] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.253 [2024-07-15 19:35:07.783664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.253 19:35:07 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:42.253 19:35:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.625 19:35:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.625 19:35:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.625 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:43.626 19:35:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.626 00:06:43.626 real 0m1.568s 00:06:43.626 user 0m1.341s 00:06:43.626 sys 0m0.130s 00:06:43.626 19:35:09 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.626 19:35:09 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:43.626 ************************************ 00:06:43.626 END TEST accel_fill 00:06:43.626 ************************************ 00:06:43.626 19:35:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.626 19:35:09 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:43.626 19:35:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:43.626 19:35:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.626 19:35:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.626 ************************************ 00:06:43.626 START TEST accel_copy_crc32c 00:06:43.626 ************************************ 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:43.626 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:43.626 [2024-07-15 19:35:09.128259] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:43.626 [2024-07-15 19:35:09.128386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63869 ] 00:06:43.626 [2024-07-15 19:35:09.268047] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.626 [2024-07-15 19:35:09.390755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.884 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.885 19:35:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.258 00:06:45.258 real 0m1.521s 00:06:45.258 user 0m1.306s 00:06:45.258 sys 0m0.120s 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.258 19:35:10 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:45.258 ************************************ 00:06:45.258 END TEST accel_copy_crc32c 00:06:45.258 ************************************ 00:06:45.258 19:35:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.258 19:35:10 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:45.258 19:35:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:45.258 19:35:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.258 19:35:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.258 ************************************ 00:06:45.258 START TEST accel_copy_crc32c_C2 00:06:45.258 ************************************ 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.258 19:35:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:45.258 [2024-07-15 19:35:10.694218] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:45.258 [2024-07-15 19:35:10.694929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63909 ] 00:06:45.258 [2024-07-15 19:35:10.834145] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.258 [2024-07-15 19:35:10.962886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:45.258 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:45.259 19:35:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:46.631 ************************************ 00:06:46.631 END TEST accel_copy_crc32c_C2 00:06:46.631 ************************************ 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.631 00:06:46.631 real 0m1.527s 00:06:46.631 
user 0m1.305s 00:06:46.631 sys 0m0.124s 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.631 19:35:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:46.631 19:35:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.631 19:35:12 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:46.631 19:35:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:46.631 19:35:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.631 19:35:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.631 ************************************ 00:06:46.631 START TEST accel_dualcast 00:06:46.631 ************************************ 00:06:46.631 19:35:12 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:46.631 19:35:12 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:46.631 [2024-07-15 19:35:12.268486] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:46.631 [2024-07-15 19:35:12.268565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63938 ] 00:06:46.631 [2024-07-15 19:35:12.403453] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.890 [2024-07-15 19:35:12.530583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.890 19:35:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.274 19:35:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:48.275 19:35:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.275 00:06:48.275 real 0m1.514s 00:06:48.275 user 0m1.306s 00:06:48.275 sys 0m0.114s 00:06:48.275 19:35:13 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.275 ************************************ 00:06:48.275 END TEST accel_dualcast 00:06:48.275 ************************************ 00:06:48.275 19:35:13 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:48.275 19:35:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.275 19:35:13 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:48.275 19:35:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.275 19:35:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.275 19:35:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.275 ************************************ 00:06:48.275 START TEST accel_compare 00:06:48.275 ************************************ 00:06:48.275 19:35:13 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:48.275 19:35:13 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:48.275 [2024-07-15 19:35:13.829326] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
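Every accel case in this part of the log drives the same example binary with a different -w workload; the xtrace above shows the exact command used for the compare case. A minimal sketch of that invocation, annotated under the assumption (not stated in the log) that -c /dev/fd/62 is the JSON accel config assembled by build_accel_config and handed over an anonymous file descriptor:

  # command echoed at accel.sh@12 above for the compare case; a by-hand rerun
  # would presumably point -c at a real JSON config file, or drop it entirely
  # when no extra accel module needs to be configured
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y

Reading the values the script echoes for this case just below (0x1, compare, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes), this is a one-second run over 4096-byte buffers against the software module with verification enabled; mapping those values onto specific accel_perf flags is an inference from the trace, not something the log spells out.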
00:06:48.275 [2024-07-15 19:35:13.829453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63978 ] 00:06:48.275 [2024-07-15 19:35:13.967977] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.559 [2024-07-15 19:35:14.089288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:48.559 19:35:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.936 19:35:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.936 19:35:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.936 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.936 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.936 19:35:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.936 19:35:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.936 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 
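The repetitive val=/case/IFS/read blocks above are accel.sh walking a small shell loop that records which opcode and module the run is expected to use; the @27 lines that follow each run assert on what was recorded. A rough paraphrase of that pattern (not the verbatim script; expected_settings is a hypothetical stand-in for wherever the real script gets its list):

  # paraphrase of the accel.sh@19-@23 pattern traced above: read colon-separated
  # var/val pairs, remember the opcode (@23) and module (@22), ignore the rest
  while IFS=: read -r var val; do
      case "$var" in
          opc)    accel_opc=$val ;;     # e.g. compare, xor, dif_verify
          module) accel_module=$val ;;  # software for every run in this log
          *)      : ;;                  # '4096 bytes', 32, '1 seconds', Yes, ...
      esac
  done < <(expected_settings)

  # accel.sh@27, also visible in the trace: the case passes only if both values
  # were seen and the module that actually ran is the software one
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]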
00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:49.937 19:35:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.937 00:06:49.937 real 0m1.502s 00:06:49.937 user 0m1.296s 00:06:49.937 sys 0m0.111s 00:06:49.937 19:35:15 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.937 19:35:15 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:49.937 ************************************ 00:06:49.937 END TEST accel_compare 00:06:49.937 ************************************ 00:06:49.937 19:35:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.937 19:35:15 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:49.937 19:35:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:49.937 19:35:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.937 19:35:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.937 ************************************ 00:06:49.937 START TEST accel_xor 00:06:49.937 ************************************ 00:06:49.937 19:35:15 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:49.937 [2024-07-15 19:35:15.386095] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:49.937 [2024-07-15 19:35:15.386272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64007 ] 00:06:49.937 [2024-07-15 19:35:15.525883] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.937 [2024-07-15 19:35:15.630123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
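Each accel_test case spawns its own accel_perf process, so each one repeats DPDK/SPDK initialization: the EAL line above shows a single-core mask (-c 0x1), --iova-mode=pa, telemetry disabled, and a unique --file-prefix derived from the PID (spdk_pid63938, 63978, 64007, 64047, 64076, 64116 and 64147 across these cases). That per-case startup is why a -t 1 workload shows up as roughly 1.5 s of wall time in the timing summaries. A reader-side helper, assuming the log has been saved to a file (placeholder name below):

  # list the per-run DPDK file prefixes (hence accel_perf PIDs) recorded in the
  # EAL parameter lines of a saved copy of this log
  grep -o 'file-prefix=spdk_pid[0-9]*' nvmf-tcp-vg-autotest.log | sort -u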
00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.937 19:35:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.310 19:35:16 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.310 ************************************ 00:06:51.310 END TEST accel_xor 00:06:51.310 ************************************ 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.310 00:06:51.310 real 0m1.502s 00:06:51.310 user 0m1.287s 00:06:51.310 sys 0m0.120s 00:06:51.310 19:35:16 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.310 19:35:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:51.310 19:35:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.310 19:35:16 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:51.310 19:35:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:51.310 19:35:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.310 19:35:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.310 ************************************ 00:06:51.310 START TEST accel_xor 00:06:51.310 ************************************ 00:06:51.310 19:35:16 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:51.310 19:35:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:51.310 [2024-07-15 19:35:16.932152] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:51.310 [2024-07-15 19:35:16.932252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64047 ] 00:06:51.310 [2024-07-15 19:35:17.063588] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.568 [2024-07-15 19:35:17.168314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.568 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
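The two back-to-back accel_xor cases (the second still echoing its settings here) differ only in the number of XOR source buffers: the plain run echoes val=2 and the -x 3 rerun echoes val=3; reading that 2/3 as the source count is an inference from the -x flag, not spelled out in the log. The traced commands are:

  # accel.sh@109 and accel.sh@110 as run above; only the source count changes
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3

Both cases register under the same test name, which is why START TEST accel_xor and END TEST accel_xor each appear twice in this part of the run.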
00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.569 19:35:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.971 19:35:18 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:52.971 19:35:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.971 00:06:52.971 real 0m1.483s 00:06:52.971 user 0m1.273s 00:06:52.971 sys 0m0.116s 00:06:52.971 19:35:18 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.971 19:35:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:52.971 ************************************ 00:06:52.971 END TEST accel_xor 00:06:52.971 ************************************ 00:06:52.971 19:35:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.971 19:35:18 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:52.971 19:35:18 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:52.971 19:35:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.971 19:35:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.971 ************************************ 00:06:52.971 START TEST accel_dif_verify 00:06:52.971 ************************************ 00:06:52.971 19:35:18 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:52.971 19:35:18 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:52.971 [2024-07-15 19:35:18.462226] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
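The three dif_* cases starting here reuse the same harness but echo an extra set of byte sizes ('4096 bytes' twice, '512 bytes', '8 bytes'); these presumably describe the data/block geometry and the 8-byte DIF metadata the workload generates or verifies, though the trace itself does not label them. The traced command for the first of them is:

  # accel.sh@111 as run above; the dif_generate (@112) and dif_generate_copy
  # (@113) cases further down swap only the -w argument
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify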
00:06:52.971 [2024-07-15 19:35:18.462326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64076 ] 00:06:52.971 [2024-07-15 19:35:18.595497] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.971 [2024-07-15 19:35:18.715941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:53.229 19:35:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:19 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:54.601 19:35:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.601 00:06:54.601 real 0m1.533s 00:06:54.601 user 0m1.316s 00:06:54.601 sys 0m0.123s 00:06:54.601 19:35:19 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.601 19:35:19 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:54.601 ************************************ 00:06:54.601 END TEST accel_dif_verify 00:06:54.601 ************************************ 00:06:54.601 19:35:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.601 19:35:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:54.601 19:35:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:54.601 19:35:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.601 19:35:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.601 ************************************ 00:06:54.601 START TEST accel_dif_generate 00:06:54.601 ************************************ 00:06:54.601 19:35:20 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:54.601 [2024-07-15 19:35:20.043650] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:54.601 [2024-07-15 19:35:20.043737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64116 ] 00:06:54.601 [2024-07-15 19:35:20.178015] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.601 [2024-07-15 19:35:20.302520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.601 19:35:20 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:54.601 19:35:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.973 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.974 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.974 19:35:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.974 19:35:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.974 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.974 19:35:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.974 19:35:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.974 19:35:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:55.974 19:35:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.974 00:06:55.974 real 0m1.508s 
00:06:55.974 user 0m1.295s 00:06:55.974 sys 0m0.117s 00:06:55.974 19:35:21 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.974 ************************************ 00:06:55.974 END TEST accel_dif_generate 00:06:55.974 ************************************ 00:06:55.974 19:35:21 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:55.974 19:35:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.974 19:35:21 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:55.974 19:35:21 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:55.974 19:35:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.974 19:35:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.974 ************************************ 00:06:55.974 START TEST accel_dif_generate_copy 00:06:55.974 ************************************ 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:55.974 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:55.974 [2024-07-15 19:35:21.604205] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:06:55.974 [2024-07-15 19:35:21.604319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64147 ] 00:06:55.974 [2024-07-15 19:35:21.743123] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.233 [2024-07-15 19:35:21.865960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.233 19:35:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.609 00:06:57.609 real 0m1.515s 00:06:57.609 user 0m1.302s 00:06:57.609 sys 0m0.117s 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.609 19:35:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.609 ************************************ 00:06:57.609 END TEST accel_dif_generate_copy 00:06:57.609 ************************************ 00:06:57.609 19:35:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.609 19:35:23 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:57.610 19:35:23 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.610 19:35:23 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:57.610 19:35:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.610 19:35:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.610 ************************************ 00:06:57.610 START TEST accel_comp 00:06:57.610 ************************************ 00:06:57.610 19:35:23 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:57.610 19:35:23 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:57.610 19:35:23 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:57.610 [2024-07-15 19:35:23.162093] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:57.610 [2024-07-15 19:35:23.162187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64187 ] 00:06:57.610 [2024-07-15 19:35:23.294527] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.868 [2024-07-15 19:35:23.409576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:57.868 19:35:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.243 19:35:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.244 19:35:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.244 19:35:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:59.244 19:35:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.244 00:06:59.244 real 0m1.498s 00:06:59.244 user 0m1.292s 00:06:59.244 sys 0m0.114s 00:06:59.244 19:35:24 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.244 ************************************ 00:06:59.244 END TEST accel_comp 00:06:59.244 ************************************ 00:06:59.244 19:35:24 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:59.244 19:35:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.244 19:35:24 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.244 19:35:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:59.244 19:35:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.244 19:35:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.244 ************************************ 00:06:59.244 START TEST accel_decomp 00:06:59.244 ************************************ 00:06:59.244 19:35:24 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:59.244 19:35:24 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:59.244 [2024-07-15 19:35:24.714183] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:06:59.244 [2024-07-15 19:35:24.714286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64216 ] 00:06:59.244 [2024-07-15 19:35:24.848922] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.244 [2024-07-15 19:35:25.002268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.503 19:35:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.878 19:35:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.878 00:07:00.878 real 0m1.563s 00:07:00.878 user 0m1.337s 00:07:00.878 sys 0m0.130s 00:07:00.878 19:35:26 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.878 19:35:26 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:00.878 ************************************ 00:07:00.878 END TEST accel_decomp 00:07:00.878 ************************************ 00:07:00.878 19:35:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.878 19:35:26 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.878 19:35:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:00.878 19:35:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.878 19:35:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.878 ************************************ 00:07:00.878 START TEST accel_decomp_full 00:07:00.878 ************************************ 00:07:00.878 19:35:26 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:00.878 [2024-07-15 19:35:26.318993] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:07:00.878 [2024-07-15 19:35:26.319076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64256 ] 00:07:00.878 [2024-07-15 19:35:26.449612] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.878 [2024-07-15 19:35:26.556506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:00.878 19:35:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.252 19:35:27 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.252 19:35:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.253 19:35:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 19:35:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 19:35:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.253 19:35:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.253 19:35:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.253 00:07:02.253 real 0m1.491s 00:07:02.253 user 0m1.288s 00:07:02.253 sys 0m0.113s 00:07:02.253 19:35:27 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.253 ************************************ 00:07:02.253 19:35:27 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:02.253 END TEST accel_decomp_full 00:07:02.253 ************************************ 00:07:02.253 19:35:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.253 19:35:27 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.253 19:35:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:02.253 19:35:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.253 19:35:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.253 ************************************ 00:07:02.253 START TEST accel_decomp_mcore 00:07:02.253 ************************************ 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:02.253 19:35:27 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:02.253 [2024-07-15 19:35:27.864315] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:02.253 [2024-07-15 19:35:27.864410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64286 ] 00:07:02.253 [2024-07-15 19:35:27.999439] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.511 [2024-07-15 19:35:28.122779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.511 [2024-07-15 19:35:28.122888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.511 [2024-07-15 19:35:28.123621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.511 [2024-07-15 19:35:28.123668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.511 19:35:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.883 00:07:03.883 real 0m1.555s 00:07:03.883 user 0m0.021s 00:07:03.883 sys 0m0.003s 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.883 19:35:29 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:03.883 ************************************ 00:07:03.883 END TEST accel_decomp_mcore 00:07:03.883 ************************************ 00:07:03.883 19:35:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.883 19:35:29 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:03.883 19:35:29 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:03.883 19:35:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.883 19:35:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.883 ************************************ 00:07:03.883 START TEST accel_decomp_full_mcore 00:07:03.883 ************************************ 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.883 19:35:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:03.883 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:03.883 [2024-07-15 19:35:29.465888] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:03.883 [2024-07-15 19:35:29.465982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64331 ] 00:07:03.884 [2024-07-15 19:35:29.597374] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.142 [2024-07-15 19:35:29.714758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.142 [2024-07-15 19:35:29.714848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.142 [2024-07-15 19:35:29.714966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.142 [2024-07-15 19:35:29.714970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.142 19:35:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.142 19:35:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.516 00:07:05.516 real 0m1.521s 00:07:05.516 user 0m4.730s 00:07:05.516 sys 0m0.135s 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.516 ************************************ 00:07:05.516 END TEST accel_decomp_full_mcore 00:07:05.516 ************************************ 00:07:05.516 19:35:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:05.516 19:35:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.516 19:35:31 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:05.516 19:35:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:05.516 19:35:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.516 19:35:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.516 ************************************ 00:07:05.516 START TEST accel_decomp_mthread 00:07:05.516 ************************************ 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:05.516 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:05.516 [2024-07-15 19:35:31.036612] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
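Note on the run that just completed: the accel_decomp_full_mcore case reduces to the single accel_perf invocation echoed in its trace, reproduced below with comments only for readability. The command line is verbatim from the log; reading -m 0xf as a four-core mask and -o 0 as "use the full 111250-byte buffer" is inferred from the surrounding reactor and size messages rather than stated anywhere in the output.

  # full-buffer decompress of test/accel/bib across cores 0-3 (the four reactors for 0xf start above);
  # the JSON accel config is handed over an inherited fd by the build_accel_config step in accel.sh
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
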
00:07:05.516 [2024-07-15 19:35:31.036739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64369 ] 00:07:05.516 [2024-07-15 19:35:31.167815] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.516 [2024-07-15 19:35:31.288804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.801 19:35:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.735 ************************************ 00:07:06.735 END TEST accel_decomp_mthread 00:07:06.735 ************************************ 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.735 00:07:06.735 real 0m1.502s 00:07:06.735 user 0m1.290s 00:07:06.735 sys 0m0.118s 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.735 19:35:32 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:06.993 19:35:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.993 19:35:32 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.993 19:35:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:06.993 19:35:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.993 19:35:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.993 ************************************ 00:07:06.993 START 
TEST accel_decomp_full_mthread 00:07:06.993 ************************************ 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:06.993 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:06.993 [2024-07-15 19:35:32.590420] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:07:06.993 [2024-07-15 19:35:32.590517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64403 ] 00:07:06.993 [2024-07-15 19:35:32.730901] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.252 [2024-07-15 19:35:32.835410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.252 19:35:32 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.252 19:35:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.624 00:07:08.624 real 0m1.523s 00:07:08.624 user 0m1.314s 00:07:08.624 sys 0m0.118s 00:07:08.624 ************************************ 00:07:08.624 END TEST accel_decomp_full_mthread 00:07:08.624 ************************************ 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.624 19:35:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
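Both *_mthread cases above differ from the mcore runs only in dropping the explicit core mask (the config dumps show the default 0x1) and adding -T 2. The command lines below are copied from the trace; reading -T 2 as "two worker threads" is an inference from the test names and the val=2 entries, not something the log spells out.

  # single core (0x1), two threads, default 4096-byte chunks
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
  # the "full" variant adds -o 0, which per the '111250 bytes' value above decompresses
  # the whole bib buffer in one shot instead of 4096-byte chunks
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2

Both runs finished in roughly 1.5 s of wall time on the software module, in line with the mcore results above.
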
00:07:08.624 19:35:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.624 19:35:34 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:08.624 19:35:34 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.624 19:35:34 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:08.624 19:35:34 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.624 19:35:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.624 19:35:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.624 19:35:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.624 19:35:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.624 19:35:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.624 19:35:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.624 19:35:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.624 19:35:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:08.624 19:35:34 accel -- accel/accel.sh@41 -- # jq -r . 00:07:08.624 ************************************ 00:07:08.624 START TEST accel_dif_functional_tests 00:07:08.624 ************************************ 00:07:08.624 19:35:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.624 [2024-07-15 19:35:34.198048] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:08.624 [2024-07-15 19:35:34.198246] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64439 ] 00:07:08.624 [2024-07-15 19:35:34.344627] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.882 [2024-07-15 19:35:34.463881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.882 [2024-07-15 19:35:34.464031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.882 [2024-07-15 19:35:34.464040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.882 00:07:08.882 00:07:08.882 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.882 http://cunit.sourceforge.net/ 00:07:08.882 00:07:08.882 00:07:08.882 Suite: accel_dif 00:07:08.882 Test: verify: DIF generated, GUARD check ...passed 00:07:08.882 Test: verify: DIF generated, APPTAG check ...passed 00:07:08.882 Test: verify: DIF generated, REFTAG check ...passed 00:07:08.882 Test: verify: DIF not generated, GUARD check ...passed 00:07:08.882 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 19:35:34.558037] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.882 [2024-07-15 19:35:34.558127] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.882 passed 00:07:08.882 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 19:35:34.558257] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.882 passed 00:07:08.882 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:08.882 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 19:35:34.558402] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:08.882 passed 00:07:08.882 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:07:08.882 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:08.882 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:08.882 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:08.882 Test: verify copy: DIF generated, GUARD check ...[2024-07-15 19:35:34.558752] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:08.882 passed 00:07:08.882 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:08.882 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:08.882 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 19:35:34.559103] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.882 passed 00:07:08.882 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 19:35:34.559716] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.882 passed 00:07:08.882 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 19:35:34.559999] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.882 passed 00:07:08.882 Test: generate copy: DIF generated, GUARD check ...passed 00:07:08.882 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:08.882 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:08.882 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:08.882 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:08.882 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:08.882 Test: generate copy: iovecs-len validate ...passed 00:07:08.882 Test: generate copy: buffer alignment validate ...[2024-07-15 19:35:34.560743] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:08.882 passed 00:07:08.882 00:07:08.882 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.882 suites 1 1 n/a 0 0 00:07:08.882 tests 26 26 26 0 0 00:07:08.882 asserts 115 115 115 0 n/a 00:07:08.882 00:07:08.882 Elapsed time = 0.007 seconds 00:07:09.188 ************************************ 00:07:09.188 END TEST accel_dif_functional_tests 00:07:09.188 ************************************ 00:07:09.188 00:07:09.188 real 0m0.655s 00:07:09.188 user 0m0.850s 00:07:09.188 sys 0m0.167s 00:07:09.188 19:35:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.188 19:35:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:09.188 19:35:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.188 ************************************ 00:07:09.188 END TEST accel 00:07:09.188 ************************************ 00:07:09.188 00:07:09.188 real 0m35.080s 00:07:09.188 user 0m36.731s 00:07:09.188 sys 0m4.096s 00:07:09.188 19:35:34 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.188 19:35:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.188 19:35:34 -- common/autotest_common.sh@1142 -- # return 0 00:07:09.188 19:35:34 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:09.188 19:35:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.188 19:35:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.188 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:07:09.188 ************************************ 00:07:09.188 START TEST accel_rpc 00:07:09.188 ************************************ 00:07:09.188 19:35:34 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:09.188 * Looking for test storage... 00:07:09.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:09.446 19:35:34 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:09.446 19:35:34 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64503 00:07:09.446 19:35:34 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:09.446 19:35:34 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64503 00:07:09.446 19:35:34 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64503 ']' 00:07:09.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.446 19:35:34 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.446 19:35:34 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.446 19:35:34 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.446 19:35:34 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.446 19:35:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.446 [2024-07-15 19:35:35.039252] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
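All 26 CUnit cases in the accel_dif suite above exercise Guard, App Tag and Ref Tag checking across the verify, verify-copy and generate-copy paths; the *ERROR* lines are the expected negative-path output, and the summary confirms 0 failures across 115 asserts. The dif binary is launched the same way as accel_perf, with its JSON accel config passed over an inherited file descriptor. A minimal sketch of that pattern, using a placeholder empty config rather than whatever build_accel_config actually emitted, would look like:

  # hand a config to the test over /dev/fd instead of a temp file; the JSON here is a
  # stand-in, the real content comes from the build_accel_config step traced above
  exec {cfg}< <(echo '{"subsystems": []}')
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/$cfg
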
00:07:09.446 [2024-07-15 19:35:35.039673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64503 ] 00:07:09.446 [2024-07-15 19:35:35.178175] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.704 [2024-07-15 19:35:35.299316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.269 19:35:36 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.269 19:35:36 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:10.269 19:35:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:10.269 19:35:36 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:10.269 19:35:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:10.269 19:35:36 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:10.269 19:35:36 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:10.270 19:35:36 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.270 19:35:36 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.270 19:35:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.270 ************************************ 00:07:10.270 START TEST accel_assign_opcode 00:07:10.270 ************************************ 00:07:10.270 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:10.270 19:35:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:10.270 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.270 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.270 [2024-07-15 19:35:36.047846] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:10.527 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.528 [2024-07-15 19:35:36.055844] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:10.528 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.786 software 00:07:10.786 00:07:10.786 real 0m0.297s 00:07:10.786 user 0m0.046s 00:07:10.786 sys 0m0.013s 00:07:10.786 ************************************ 00:07:10.786 END TEST accel_assign_opcode 00:07:10.786 ************************************ 00:07:10.786 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.786 19:35:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:10.786 19:35:36 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64503 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64503 ']' 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64503 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64503 00:07:10.786 killing process with pid 64503 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64503' 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@967 -- # kill 64503 00:07:10.786 19:35:36 accel_rpc -- common/autotest_common.sh@972 -- # wait 64503 00:07:11.043 ************************************ 00:07:11.044 END TEST accel_rpc 00:07:11.044 ************************************ 00:07:11.044 00:07:11.044 real 0m1.908s 00:07:11.044 user 0m2.012s 00:07:11.044 sys 0m0.456s 00:07:11.044 19:35:36 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.044 19:35:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.302 19:35:36 -- common/autotest_common.sh@1142 -- # return 0 00:07:11.302 19:35:36 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:11.302 19:35:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.302 19:35:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.302 19:35:36 -- common/autotest_common.sh@10 -- # set +x 00:07:11.302 ************************************ 00:07:11.302 START TEST app_cmdline 00:07:11.302 ************************************ 00:07:11.302 19:35:36 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:11.302 * Looking for test storage... 00:07:11.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:11.302 19:35:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.302 19:35:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64614 00:07:11.302 19:35:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:11.302 19:35:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64614 00:07:11.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
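The accel_rpc/accel_assign_opcode run above is driven entirely through rpc.py against a target started with --wait-for-rpc; the calls below are the ones visible in the trace (rpc_cmd is just a wrapper that points rpc.py at the target's /var/tmp/spdk.sock). The "incorrect" assignment is accepted at RPC time, but the final grep shows the copy opcode ends up on the software module once the framework initializes.

  R=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $R accel_assign_opc -o copy -m incorrect   # logged: operation copy will be assigned to module incorrect
  $R accel_assign_opc -o copy -m software    # logged: operation copy will be assigned to module software
  $R framework_start_init                    # subsystems come up; assignments become effective
  $R accel_get_opc_assignments | jq -r .copy # prints "software", which the test greps for
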
00:07:11.302 19:35:36 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64614 ']' 00:07:11.302 19:35:36 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.302 19:35:36 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.302 19:35:36 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.302 19:35:36 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.302 19:35:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.302 [2024-07-15 19:35:36.976660] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:11.302 [2024-07-15 19:35:36.977045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64614 ] 00:07:11.561 [2024-07-15 19:35:37.106646] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.561 [2024-07-15 19:35:37.235193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.496 19:35:37 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.497 19:35:37 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:12.497 19:35:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:12.497 { 00:07:12.497 "fields": { 00:07:12.497 "commit": "c9ef451fa", 00:07:12.497 "major": 24, 00:07:12.497 "minor": 9, 00:07:12.497 "patch": 0, 00:07:12.497 "suffix": "-pre" 00:07:12.497 }, 00:07:12.497 "version": "SPDK v24.09-pre git sha1 c9ef451fa" 00:07:12.497 } 00:07:12.497 19:35:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.497 19:35:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.497 19:35:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:12.497 19:35:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.497 19:35:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.497 19:35:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.497 19:35:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:12.497 19:35:38 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.497 19:35:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.497 19:35:38 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.755 19:35:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.755 19:35:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.755 19:35:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.755 19:35:38 
app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:12.755 19:35:38 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.755 2024/07/15 19:35:38 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:12.755 request: 00:07:12.755 { 00:07:12.755 "method": "env_dpdk_get_mem_stats", 00:07:12.755 "params": {} 00:07:12.755 } 00:07:12.755 Got JSON-RPC error response 00:07:12.755 GoRPCClient: error on JSON-RPC call 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.013 19:35:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64614 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64614 ']' 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64614 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64614 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64614' 00:07:13.013 killing process with pid 64614 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@967 -- # kill 64614 00:07:13.013 19:35:38 app_cmdline -- common/autotest_common.sh@972 -- # wait 64614 00:07:13.271 00:07:13.271 real 0m2.129s 00:07:13.271 user 0m2.647s 00:07:13.271 sys 0m0.500s 00:07:13.271 19:35:38 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.271 19:35:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.271 ************************************ 00:07:13.271 END TEST app_cmdline 00:07:13.271 ************************************ 00:07:13.271 19:35:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:13.271 19:35:39 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:13.271 19:35:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.271 19:35:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.272 19:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.272 ************************************ 00:07:13.272 START TEST version 00:07:13.272 ************************************ 00:07:13.272 19:35:39 
version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:13.530 * Looking for test storage... 00:07:13.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:13.530 19:35:39 version -- app/version.sh@17 -- # get_header_version major 00:07:13.530 19:35:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:13.530 19:35:39 version -- app/version.sh@14 -- # cut -f2 00:07:13.530 19:35:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.530 19:35:39 version -- app/version.sh@17 -- # major=24 00:07:13.530 19:35:39 version -- app/version.sh@18 -- # get_header_version minor 00:07:13.530 19:35:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:13.530 19:35:39 version -- app/version.sh@14 -- # cut -f2 00:07:13.530 19:35:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.530 19:35:39 version -- app/version.sh@18 -- # minor=9 00:07:13.530 19:35:39 version -- app/version.sh@19 -- # get_header_version patch 00:07:13.530 19:35:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:13.530 19:35:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.530 19:35:39 version -- app/version.sh@14 -- # cut -f2 00:07:13.530 19:35:39 version -- app/version.sh@19 -- # patch=0 00:07:13.530 19:35:39 version -- app/version.sh@20 -- # get_header_version suffix 00:07:13.530 19:35:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:13.530 19:35:39 version -- app/version.sh@14 -- # cut -f2 00:07:13.530 19:35:39 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.530 19:35:39 version -- app/version.sh@20 -- # suffix=-pre 00:07:13.530 19:35:39 version -- app/version.sh@22 -- # version=24.9 00:07:13.530 19:35:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:13.530 19:35:39 version -- app/version.sh@28 -- # version=24.9rc0 00:07:13.530 19:35:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:13.530 19:35:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:13.530 19:35:39 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:13.530 19:35:39 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:13.530 00:07:13.530 real 0m0.143s 00:07:13.530 user 0m0.084s 00:07:13.530 sys 0m0.090s 00:07:13.530 19:35:39 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.530 19:35:39 version -- common/autotest_common.sh@10 -- # set +x 00:07:13.530 ************************************ 00:07:13.531 END TEST version 00:07:13.531 ************************************ 00:07:13.531 19:35:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:13.531 19:35:39 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:13.531 19:35:39 -- spdk/autotest.sh@198 -- # uname -s 00:07:13.531 19:35:39 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:13.531 19:35:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:13.531 19:35:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:13.531 19:35:39 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 
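As a condensed sketch of what app/version.sh did above: it pulls the version fields out of include/spdk/version.h and cross-checks them against the installed Python package (header path and grep patterns as in the log; the -pre suffix is what turns into rc0 in the 24.9rc0 value seen above):

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  get_field() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
  ver="$(get_field MAJOR).$(get_field MINOR)"                        # 24.9
  if [[ $(get_field PATCH) != 0 ]]; then ver+=".$(get_field PATCH)"; fi
  if [[ $(get_field SUFFIX) == -pre ]]; then ver+=rc0; fi            # matches the 24.9rc0 comparison above
  [[ $ver == "$(python3 -c 'import spdk; print(spdk.__version__)')" ]] && echo "header and python package agree: $ver"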
00:07:13.531 19:35:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:13.531 19:35:39 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:13.531 19:35:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.531 19:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.531 19:35:39 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:13.531 19:35:39 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:13.531 19:35:39 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:13.531 19:35:39 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:13.531 19:35:39 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:13.531 19:35:39 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:13.531 19:35:39 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:13.531 19:35:39 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.531 19:35:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.531 19:35:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.531 ************************************ 00:07:13.531 START TEST nvmf_tcp 00:07:13.531 ************************************ 00:07:13.531 19:35:39 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:13.790 * Looking for test storage... 00:07:13.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.790 19:35:39 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.790 19:35:39 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.790 19:35:39 nvmf_tcp -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.790 19:35:39 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.790 19:35:39 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.790 19:35:39 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.790 19:35:39 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:13.790 19:35:39 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:13.790 19:35:39 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.790 19:35:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:13.790 19:35:39 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:13.790 19:35:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.790 19:35:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.790 19:35:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.790 ************************************ 00:07:13.790 START TEST 
nvmf_example 00:07:13.790 ************************************ 00:07:13.790 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:13.790 * Looking for test storage... 00:07:13.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:13.790 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:13.790 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:13.790 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.790 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:13.791 Cannot find device "nvmf_init_br" 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:13.791 Cannot find device "nvmf_tgt_br" 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:13.791 Cannot find device "nvmf_tgt_br2" 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:13.791 Cannot find device "nvmf_init_br" 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:13.791 Cannot find device "nvmf_tgt_br" 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:13.791 Cannot find device "nvmf_tgt_br2" 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:13.791 Cannot find device "nvmf_br" 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:13.791 Cannot find device "nvmf_init_if" 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:13.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:13.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:13.791 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
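For reference, the network plumbing nvmf_veth_init has built by this point can be reproduced stand-alone roughly as follows (namespace, interface names and addresses exactly as in the log; the nvmf_br bridge, its enslaved ports and the iptables ACCEPT rules follow in the next lines):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2         # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                   # target ends live inside the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second listener address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up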
00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:14.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:07:14.050 00:07:14.050 --- 10.0.0.2 ping statistics --- 00:07:14.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.050 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:14.050 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:14.050 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:07:14.050 00:07:14.050 --- 10.0.0.3 ping statistics --- 00:07:14.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.050 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:14.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:14.050 00:07:14.050 --- 10.0.0.1 ping statistics --- 00:07:14.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.050 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.050 19:35:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64975 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:14.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 64975 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 64975 ']' 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
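With the example target now running inside the namespace (pid 64975), the lines that follow configure it over JSON-RPC and then drive it from the initiator side with spdk_nvme_perf. Condensed into plain commands, with the method names, NQN, address and perf parameters exactly as the log shows next (rpc_cmd in the test wraps the same methods through scripts/rpc.py):

  # target side: TCP transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem listening on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                           # creates "Malloc0"
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: queue depth 64, 4 KiB random mixed I/O (-M 30 sets the read share of the mix) for 10 seconds
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'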
00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.309 19:35:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.244 19:35:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.244 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.503 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.503 19:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.503 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.503 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.503 19:35:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.503 19:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:15.503 19:35:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:25.517 Initializing NVMe Controllers 00:07:25.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:25.517 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:25.517 Initialization complete. Launching workers. 00:07:25.517 ======================================================== 00:07:25.517 Latency(us) 00:07:25.517 Device Information : IOPS MiB/s Average min max 00:07:25.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15150.80 59.18 4225.22 813.35 22174.40 00:07:25.517 ======================================================== 00:07:25.517 Total : 15150.80 59.18 4225.22 813.35 22174.40 00:07:25.517 00:07:25.517 19:35:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:25.517 19:35:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:25.517 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.517 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.773 rmmod nvme_tcp 00:07:25.773 rmmod nvme_fabrics 00:07:25.773 rmmod nvme_keyring 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64975 ']' 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64975 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 64975 ']' 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 64975 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64975 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:25.773 killing process with pid 64975 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64975' 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 64975 00:07:25.773 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 64975 00:07:26.031 nvmf threads initialize successfully 00:07:26.031 bdev subsystem init successfully 00:07:26.031 created a nvmf target service 00:07:26.031 create targets's poll groups done 00:07:26.031 all subsystems of target started 00:07:26.031 nvmf target is running 00:07:26.031 all subsystems of target stopped 00:07:26.031 destroy targets's poll groups done 00:07:26.031 destroyed the nvmf target service 00:07:26.031 bdev subsystem finish successfully 00:07:26.031 nvmf threads destroy successfully 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:26.031 19:35:51 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:26.031 00:07:26.031 real 0m12.326s 00:07:26.031 user 0m44.278s 00:07:26.031 sys 0m2.069s 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.031 ************************************ 00:07:26.031 END TEST nvmf_example 00:07:26.031 ************************************ 00:07:26.031 19:35:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:26.031 19:35:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:26.031 19:35:51 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:26.032 19:35:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.032 19:35:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.032 19:35:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.032 ************************************ 00:07:26.032 START TEST nvmf_filesystem 00:07:26.032 ************************************ 00:07:26.032 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:26.292 * Looking for test storage... 
00:07:26.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:26.292 19:35:51 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:26.293 #define SPDK_CONFIG_H 00:07:26.293 #define SPDK_CONFIG_APPS 1 00:07:26.293 #define SPDK_CONFIG_ARCH native 00:07:26.293 #undef SPDK_CONFIG_ASAN 00:07:26.293 #define SPDK_CONFIG_AVAHI 1 00:07:26.293 #undef SPDK_CONFIG_CET 00:07:26.293 #define SPDK_CONFIG_COVERAGE 1 00:07:26.293 #define SPDK_CONFIG_CROSS_PREFIX 00:07:26.293 #undef SPDK_CONFIG_CRYPTO 00:07:26.293 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:26.293 #undef SPDK_CONFIG_CUSTOMOCF 00:07:26.293 #undef SPDK_CONFIG_DAOS 00:07:26.293 #define SPDK_CONFIG_DAOS_DIR 00:07:26.293 #define SPDK_CONFIG_DEBUG 1 00:07:26.293 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:26.293 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:26.293 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:26.293 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:26.293 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:26.293 #undef SPDK_CONFIG_DPDK_UADK 00:07:26.293 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:26.293 #define SPDK_CONFIG_EXAMPLES 1 00:07:26.293 #undef SPDK_CONFIG_FC 00:07:26.293 #define SPDK_CONFIG_FC_PATH 00:07:26.293 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:26.293 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:26.293 #undef SPDK_CONFIG_FUSE 00:07:26.293 #undef SPDK_CONFIG_FUZZER 00:07:26.293 #define SPDK_CONFIG_FUZZER_LIB 00:07:26.293 #define SPDK_CONFIG_GOLANG 1 00:07:26.293 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:26.293 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:26.293 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:26.293 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:26.293 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:26.293 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:26.293 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:26.293 #define SPDK_CONFIG_IDXD 1 00:07:26.293 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:26.293 #undef SPDK_CONFIG_IPSEC_MB 00:07:26.293 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:26.293 #define SPDK_CONFIG_ISAL 1 00:07:26.293 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:26.293 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:26.293 #define SPDK_CONFIG_LIBDIR 00:07:26.293 #undef SPDK_CONFIG_LTO 00:07:26.293 #define SPDK_CONFIG_MAX_LCORES 128 00:07:26.293 #define SPDK_CONFIG_NVME_CUSE 1 00:07:26.293 #undef SPDK_CONFIG_OCF 00:07:26.293 #define SPDK_CONFIG_OCF_PATH 00:07:26.293 #define SPDK_CONFIG_OPENSSL_PATH 00:07:26.293 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:26.293 #define SPDK_CONFIG_PGO_DIR 00:07:26.293 #undef SPDK_CONFIG_PGO_USE 00:07:26.293 #define SPDK_CONFIG_PREFIX /usr/local 00:07:26.293 #undef SPDK_CONFIG_RAID5F 00:07:26.293 #undef SPDK_CONFIG_RBD 00:07:26.293 #define SPDK_CONFIG_RDMA 1 00:07:26.293 #define SPDK_CONFIG_RDMA_PROV verbs 
00:07:26.293 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:26.293 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:26.293 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:26.293 #define SPDK_CONFIG_SHARED 1 00:07:26.293 #undef SPDK_CONFIG_SMA 00:07:26.293 #define SPDK_CONFIG_TESTS 1 00:07:26.293 #undef SPDK_CONFIG_TSAN 00:07:26.293 #define SPDK_CONFIG_UBLK 1 00:07:26.293 #define SPDK_CONFIG_UBSAN 1 00:07:26.293 #undef SPDK_CONFIG_UNIT_TESTS 00:07:26.293 #undef SPDK_CONFIG_URING 00:07:26.293 #define SPDK_CONFIG_URING_PATH 00:07:26.293 #undef SPDK_CONFIG_URING_ZNS 00:07:26.293 #define SPDK_CONFIG_USDT 1 00:07:26.293 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:26.293 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:26.293 #undef SPDK_CONFIG_VFIO_USER 00:07:26.293 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:26.293 #define SPDK_CONFIG_VHOST 1 00:07:26.293 #define SPDK_CONFIG_VIRTIO 1 00:07:26.293 #undef SPDK_CONFIG_VTUNE 00:07:26.293 #define SPDK_CONFIG_VTUNE_DIR 00:07:26.293 #define SPDK_CONFIG_WERROR 1 00:07:26.293 #define SPDK_CONFIG_WPDK_DIR 00:07:26.293 #undef SPDK_CONFIG_XNVME 00:07:26.293 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:26.293 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:26.294 19:35:51 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:26.294 19:35:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65218 ]] 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65218 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.QTtvX3 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.QTtvX3/tests/target /tmp/spdk.QTtvX3 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264508416 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267883520 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:07:26.295 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:07:26.295 
19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13785624576 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244424192 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13785624576 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244424192 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267752448 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=93459279872 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6243500032 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:26.296 * Looking for test storage... 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13785624576 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.296 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.297 19:35:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:26.297 Cannot find device "nvmf_tgt_br" 00:07:26.297 19:35:52 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.297 Cannot find device "nvmf_tgt_br2" 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:26.297 Cannot find device "nvmf_tgt_br" 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:26.297 Cannot find device "nvmf_tgt_br2" 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:26.297 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.557 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:26.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:07:26.815 00:07:26.815 --- 10.0.0.2 ping statistics --- 00:07:26.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.815 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:26.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:26.815 00:07:26.815 --- 10.0.0.3 ping statistics --- 00:07:26.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.815 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:26.815 00:07:26.815 --- 10.0.0.1 ping statistics --- 00:07:26.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.815 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.815 ************************************ 00:07:26.815 START TEST nvmf_filesystem_no_in_capsule 00:07:26.815 ************************************ 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65380 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65380 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65380 ']' 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
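For reference, the target start-up traced above reduces to launching nvmf_tgt inside the namespace built by nvmf_veth_init and waiting for its RPC socket to answer. A minimal standalone sketch, assuming the same repo paths, shm id, trace mask and core mask shown in the log; the polling loop is a simplified stand-in for waitforlisten, not its actual implementation, and the spdk_get_version call is only one convenient way to probe the socket:

  # Start the NVMe-oF TCP target in the nvmf_tgt_ns_spdk namespace with the
  # same options as nvmfappstart above (-i 0 shm id, -e 0xFFFF trace mask, -m 0xF cores).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Rough equivalent of waitforlisten: poll the default RPC socket until the
  # target responds, bailing out if the process exits first.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited before listening' >&2; break; }
      sleep 0.5
  done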
00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.815 19:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.815 [2024-07-15 19:35:52.451553] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:26.815 [2024-07-15 19:35:52.451684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.815 [2024-07-15 19:35:52.593000] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.074 [2024-07-15 19:35:52.722331] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.074 [2024-07-15 19:35:52.722624] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.074 [2024-07-15 19:35:52.722719] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.074 [2024-07-15 19:35:52.722802] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.074 [2024-07-15 19:35:52.722899] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
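Because the target was started with tracepoint group mask 0xFFFF, the notices above mean the nvmf trace can be snapshotted while the app runs or decoded later from the shared-memory copy. A short sketch of both options; only the '-s nvmf -i 0' invocation is taken verbatim from the log, the spdk_trace binary path is assumed to sit in the same build tree as nvmf_tgt, and the output filenames are illustrative:

  # Live snapshot of tracepoints for app instance 0, as the notice suggests.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt

  # Offline: keep the shared-memory ring left in /dev/shm and decode it after the run.
  cp /dev/shm/nvmf_trace.0 /tmp/
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace_offline.txt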
00:07:27.074 [2024-07-15 19:35:52.723141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.074 [2024-07-15 19:35:52.723229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.074 [2024-07-15 19:35:52.723309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.074 [2024-07-15 19:35:52.723311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 [2024-07-15 19:35:53.501533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 Malloc1 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.008 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.009 [2024-07-15 19:35:53.702226] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:28.009 { 00:07:28.009 "aliases": [ 00:07:28.009 "9b6aa49c-c32e-4858-a924-017979478b32" 00:07:28.009 ], 00:07:28.009 "assigned_rate_limits": { 00:07:28.009 "r_mbytes_per_sec": 0, 00:07:28.009 "rw_ios_per_sec": 0, 00:07:28.009 "rw_mbytes_per_sec": 0, 00:07:28.009 "w_mbytes_per_sec": 0 00:07:28.009 }, 00:07:28.009 "block_size": 512, 00:07:28.009 "claim_type": "exclusive_write", 00:07:28.009 "claimed": true, 00:07:28.009 "driver_specific": {}, 00:07:28.009 "memory_domains": [ 00:07:28.009 { 00:07:28.009 "dma_device_id": "system", 00:07:28.009 "dma_device_type": 1 00:07:28.009 }, 00:07:28.009 { 00:07:28.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.009 "dma_device_type": 2 00:07:28.009 } 00:07:28.009 ], 00:07:28.009 "name": "Malloc1", 00:07:28.009 "num_blocks": 1048576, 00:07:28.009 "product_name": "Malloc disk", 00:07:28.009 "supported_io_types": { 00:07:28.009 "abort": true, 00:07:28.009 "compare": false, 00:07:28.009 "compare_and_write": false, 00:07:28.009 "copy": true, 00:07:28.009 "flush": true, 00:07:28.009 "get_zone_info": false, 00:07:28.009 "nvme_admin": false, 00:07:28.009 "nvme_io": false, 00:07:28.009 "nvme_io_md": false, 00:07:28.009 "nvme_iov_md": false, 00:07:28.009 "read": true, 00:07:28.009 "reset": true, 00:07:28.009 "seek_data": false, 00:07:28.009 "seek_hole": false, 00:07:28.009 "unmap": true, 00:07:28.009 
"write": true, 00:07:28.009 "write_zeroes": true, 00:07:28.009 "zcopy": true, 00:07:28.009 "zone_append": false, 00:07:28.009 "zone_management": false 00:07:28.009 }, 00:07:28.009 "uuid": "9b6aa49c-c32e-4858-a924-017979478b32", 00:07:28.009 "zoned": false 00:07:28.009 } 00:07:28.009 ]' 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:28.009 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:28.267 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:28.267 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:28.267 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:28.267 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:28.267 19:35:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:28.267 19:35:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:28.267 19:35:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:28.267 19:35:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:28.267 19:35:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:28.267 19:35:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes 
nvme0n1 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:30.799 19:35:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.734 ************************************ 00:07:31.734 START TEST filesystem_ext4 00:07:31.734 ************************************ 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:31.734 19:35:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:31.734 mke2fs 1.46.5 (30-Dec-2021) 00:07:31.734 Discarding device blocks: 0/522240 done 00:07:31.734 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:31.734 Filesystem UUID: fdcd2bbd-5c33-402d-9ac7-822f6367240c 00:07:31.734 Superblock backups stored on blocks: 00:07:31.734 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:31.734 00:07:31.734 Allocating group tables: 0/64 done 00:07:31.734 Writing inode tables: 0/64 done 00:07:31.734 Creating journal (8192 blocks): done 00:07:31.734 Writing superblocks and filesystem accounting information: 0/64 done 00:07:31.734 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.734 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.992 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:31.992 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65380 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.993 ************************************ 00:07:31.993 END TEST filesystem_ext4 00:07:31.993 ************************************ 00:07:31.993 00:07:31.993 real 0m0.386s 00:07:31.993 user 0m0.022s 00:07:31.993 sys 0m0.049s 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:31.993 19:35:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.993 ************************************ 00:07:31.993 START TEST filesystem_btrfs 00:07:31.993 ************************************ 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:31.993 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:32.251 btrfs-progs v6.6.2 00:07:32.251 See https://btrfs.readthedocs.io for more information. 00:07:32.251 00:07:32.251 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:32.251 NOTE: several default settings have changed in version 5.15, please make sure 00:07:32.251 this does not affect your deployments: 00:07:32.251 - DUP for metadata (-m dup) 00:07:32.251 - enabled no-holes (-O no-holes) 00:07:32.251 - enabled free-space-tree (-R free-space-tree) 00:07:32.251 00:07:32.251 Label: (null) 00:07:32.251 UUID: c14e2176-2991-415b-a1ed-13a7aabeb0a5 00:07:32.251 Node size: 16384 00:07:32.251 Sector size: 4096 00:07:32.251 Filesystem size: 510.00MiB 00:07:32.251 Block group profiles: 00:07:32.251 Data: single 8.00MiB 00:07:32.251 Metadata: DUP 32.00MiB 00:07:32.251 System: DUP 8.00MiB 00:07:32.251 SSD detected: yes 00:07:32.251 Zoned device: no 00:07:32.251 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:32.251 Runtime features: free-space-tree 00:07:32.251 Checksum: crc32c 00:07:32.251 Number of devices: 1 00:07:32.251 Devices: 00:07:32.251 ID SIZE PATH 00:07:32.251 1 510.00MiB /dev/nvme0n1p1 00:07:32.251 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65380 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.251 00:07:32.251 real 0m0.309s 00:07:32.251 user 0m0.018s 00:07:32.251 sys 0m0.063s 00:07:32.251 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.252 19:35:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:32.252 ************************************ 00:07:32.252 END TEST filesystem_btrfs 00:07:32.252 ************************************ 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.252 ************************************ 00:07:32.252 START TEST filesystem_xfs 00:07:32.252 ************************************ 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:32.252 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:32.509 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:32.509 = sectsz=512 attr=2, projid32bit=1 00:07:32.509 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:32.509 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:32.509 data = bsize=4096 blocks=130560, imaxpct=25 00:07:32.509 = sunit=0 swidth=0 blks 00:07:32.509 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:32.509 log =internal log bsize=4096 blocks=16384, version=2 00:07:32.509 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:32.509 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:33.075 Discarding blocks...Done. 
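Note: each filesystem_* subtest above runs the same make_filesystem/check sequence from target/filesystem.sh; condensed from the xtrace output (ext4 and btrfs differ only in the mkfs command and force flag), and assuming the GPT partition created earlier on the exported namespace:
  # /dev/nvme0n1p1 is the SPDK_TEST partition created with parted above
  mkfs.xfs -f /dev/nvme0n1p1          # mkfs.ext4 uses -F, mkfs.btrfs uses -f
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa               # prove the new filesystem accepts writes
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 65380                       # the nvmf_tgt (pid 65380 in this run) must still be alive afterwards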
00:07:33.075 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:33.075 19:35:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65380 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.603 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.604 00:07:35.604 real 0m3.107s 00:07:35.604 user 0m0.015s 00:07:35.604 sys 0m0.056s 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.604 ************************************ 00:07:35.604 END TEST filesystem_xfs 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:35.604 ************************************ 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:35.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.604 19:36:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65380 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65380 ']' 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65380 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65380 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.604 killing process with pid 65380 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65380' 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65380 00:07:35.604 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65380 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:36.168 00:07:36.168 real 0m9.322s 00:07:36.168 user 0m34.914s 00:07:36.168 sys 0m1.776s 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.168 ************************************ 00:07:36.168 END TEST nvmf_filesystem_no_in_capsule 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.168 ************************************ 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
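Note: teardown of this first (no in-capsule) pass mirrors the setup, as traced in target/filesystem.sh@91-101 above. A condensed sketch, assuming rpc_cmd resolves to the standard scripts/rpc.py against /var/tmp/spdk.sock:
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the initiator
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 65380                                             # killprocess: stop the target (pid 65380 here); the harness then waits for it to exit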
00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.168 ************************************ 00:07:36.168 START TEST nvmf_filesystem_in_capsule 00:07:36.168 ************************************ 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65693 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65693 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65693 ']' 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.168 19:36:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.168 [2024-07-15 19:36:01.812255] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:36.168 [2024-07-15 19:36:01.812348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.168 [2024-07-15 19:36:01.948469] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.424 [2024-07-15 19:36:02.064350] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.424 [2024-07-15 19:36:02.064417] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.424 [2024-07-15 19:36:02.064429] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.424 [2024-07-15 19:36:02.064437] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:36.424 [2024-07-15 19:36:02.064445] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.424 [2024-07-15 19:36:02.064654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.424 [2024-07-15 19:36:02.064921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.424 [2024-07-15 19:36:02.065611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.424 [2024-07-15 19:36:02.065620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.987 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.987 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:36.987 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.988 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.988 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.245 [2024-07-15 19:36:02.796286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.245 Malloc1 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.245 19:36:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.245 [2024-07-15 19:36:02.983780] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:37.245 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:37.246 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:37.246 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:37.246 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:37.246 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:37.246 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.246 19:36:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.246 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.246 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:37.246 { 00:07:37.246 "aliases": [ 00:07:37.246 "813965cc-59c0-44eb-b143-0fd092e9ddd8" 00:07:37.246 ], 00:07:37.246 "assigned_rate_limits": { 00:07:37.246 "r_mbytes_per_sec": 0, 00:07:37.246 "rw_ios_per_sec": 0, 00:07:37.246 "rw_mbytes_per_sec": 0, 00:07:37.246 "w_mbytes_per_sec": 0 00:07:37.246 }, 00:07:37.246 "block_size": 512, 00:07:37.246 "claim_type": "exclusive_write", 00:07:37.246 "claimed": true, 00:07:37.246 "driver_specific": {}, 00:07:37.246 "memory_domains": [ 00:07:37.246 { 00:07:37.246 "dma_device_id": "system", 00:07:37.246 "dma_device_type": 1 00:07:37.246 }, 00:07:37.246 { 00:07:37.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.246 "dma_device_type": 2 00:07:37.246 } 00:07:37.246 ], 00:07:37.246 "name": "Malloc1", 00:07:37.246 "num_blocks": 1048576, 00:07:37.246 "product_name": "Malloc disk", 00:07:37.246 "supported_io_types": { 00:07:37.246 "abort": true, 00:07:37.246 "compare": false, 00:07:37.246 "compare_and_write": false, 00:07:37.246 "copy": true, 00:07:37.246 "flush": true, 00:07:37.246 "get_zone_info": false, 00:07:37.246 "nvme_admin": false, 00:07:37.246 "nvme_io": false, 00:07:37.246 "nvme_io_md": false, 00:07:37.246 "nvme_iov_md": false, 00:07:37.246 "read": true, 00:07:37.246 "reset": true, 00:07:37.246 "seek_data": false, 00:07:37.246 "seek_hole": false, 00:07:37.246 "unmap": true, 
00:07:37.246 "write": true, 00:07:37.246 "write_zeroes": true, 00:07:37.246 "zcopy": true, 00:07:37.246 "zone_append": false, 00:07:37.246 "zone_management": false 00:07:37.246 }, 00:07:37.246 "uuid": "813965cc-59c0-44eb-b143-0fd092e9ddd8", 00:07:37.246 "zoned": false 00:07:37.246 } 00:07:37.246 ]' 00:07:37.246 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:37.503 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:37.503 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:37.503 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:37.503 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:37.503 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:37.503 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:37.504 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:37.504 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:37.504 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:37.504 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:37.504 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:37.504 19:36:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:40.027 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:40.028 19:36:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:40.028 19:36:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.959 ************************************ 00:07:40.959 START TEST filesystem_in_capsule_ext4 00:07:40.959 ************************************ 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:40.959 19:36:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:40.959 mke2fs 1.46.5 (30-Dec-2021) 00:07:40.959 Discarding device blocks: 0/522240 done 00:07:40.959 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:40.959 Filesystem UUID: d15f2ef4-d607-463e-b8a7-7a73b1121eb3 00:07:40.959 Superblock backups stored on blocks: 00:07:40.959 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:40.959 00:07:40.959 Allocating group tables: 0/64 done 00:07:40.959 Writing inode tables: 0/64 done 00:07:40.959 Creating journal (8192 blocks): done 00:07:40.959 Writing superblocks and filesystem accounting information: 0/64 done 00:07:40.959 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.959 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65693 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.217 ************************************ 00:07:41.217 END TEST filesystem_in_capsule_ext4 00:07:41.217 ************************************ 00:07:41.217 00:07:41.217 real 0m0.389s 00:07:41.217 user 0m0.016s 00:07:41.217 sys 0m0.057s 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:41.217 19:36:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.217 ************************************ 00:07:41.217 START TEST filesystem_in_capsule_btrfs 00:07:41.217 ************************************ 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:41.217 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:41.218 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:41.218 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:41.218 19:36:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:41.476 btrfs-progs v6.6.2 00:07:41.476 See https://btrfs.readthedocs.io for more information. 00:07:41.476 00:07:41.476 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:41.476 NOTE: several default settings have changed in version 5.15, please make sure 00:07:41.476 this does not affect your deployments: 00:07:41.476 - DUP for metadata (-m dup) 00:07:41.476 - enabled no-holes (-O no-holes) 00:07:41.476 - enabled free-space-tree (-R free-space-tree) 00:07:41.476 00:07:41.476 Label: (null) 00:07:41.476 UUID: f3a83750-ff02-40f0-9b4f-7dead888310a 00:07:41.476 Node size: 16384 00:07:41.476 Sector size: 4096 00:07:41.476 Filesystem size: 510.00MiB 00:07:41.476 Block group profiles: 00:07:41.476 Data: single 8.00MiB 00:07:41.476 Metadata: DUP 32.00MiB 00:07:41.476 System: DUP 8.00MiB 00:07:41.476 SSD detected: yes 00:07:41.476 Zoned device: no 00:07:41.476 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:41.476 Runtime features: free-space-tree 00:07:41.476 Checksum: crc32c 00:07:41.476 Number of devices: 1 00:07:41.476 Devices: 00:07:41.476 ID SIZE PATH 00:07:41.476 1 510.00MiB /dev/nvme0n1p1 00:07:41.476 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65693 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.476 00:07:41.476 real 0m0.232s 00:07:41.476 user 0m0.024s 00:07:41.476 sys 0m0.067s 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.476 ************************************ 00:07:41.476 END TEST filesystem_in_capsule_btrfs 00:07:41.476 ************************************ 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.476 ************************************ 00:07:41.476 START TEST filesystem_in_capsule_xfs 00:07:41.476 ************************************ 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:41.476 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:41.476 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:41.476 = sectsz=512 attr=2, projid32bit=1 00:07:41.476 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:41.476 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:41.476 data = bsize=4096 blocks=130560, imaxpct=25 00:07:41.476 = sunit=0 swidth=0 blks 00:07:41.476 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:41.476 log =internal log bsize=4096 blocks=16384, version=2 00:07:41.476 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:41.476 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:42.429 Discarding blocks...Done. 
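The btrfs case above and the xfs case now starting both exercise the same make_filesystem / mount / write / unmount loop from target/filesystem.sh; a minimal sketch of that sequence, with the device and mount paths copied from the trace but treated as illustrative rather than authoritative:

    mkfs.xfs -f /dev/nvme0n1p1            # the other cases use mkfs.btrfs / mkfs.ext4 with their force flags
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                 # prove the namespace is writable over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    lsblk -l -o NAME | grep -q -w nvme0n1      # controller still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible after unmount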
00:07:42.429 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:42.429 19:36:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65693 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.331 00:07:44.331 real 0m2.611s 00:07:44.331 user 0m0.018s 00:07:44.331 sys 0m0.052s 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:44.331 ************************************ 00:07:44.331 END TEST filesystem_in_capsule_xfs 00:07:44.331 ************************************ 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:44.331 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:44.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:44.332 19:36:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65693 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65693 ']' 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65693 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65693 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65693' 00:07:44.332 killing process with pid 65693 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65693 00:07:44.332 19:36:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65693 00:07:44.590 19:36:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:44.590 00:07:44.590 real 0m8.603s 00:07:44.590 user 0m32.304s 00:07:44.590 sys 0m1.591s 00:07:44.590 19:36:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.590 19:36:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.590 ************************************ 00:07:44.590 END TEST nvmf_filesystem_in_capsule 00:07:44.590 ************************************ 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.848 rmmod nvme_tcp 00:07:44.848 rmmod nvme_fabrics 00:07:44.848 rmmod nvme_keyring 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.848 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.849 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.849 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.849 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.849 19:36:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.849 19:36:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.849 19:36:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:44.849 00:07:44.849 real 0m18.780s 00:07:44.849 user 1m7.494s 00:07:44.849 sys 0m3.736s 00:07:44.849 19:36:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.849 19:36:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.849 ************************************ 00:07:44.849 END TEST nvmf_filesystem 00:07:44.849 ************************************ 00:07:44.849 19:36:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:44.849 19:36:10 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:44.849 19:36:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.849 19:36:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.849 19:36:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.849 ************************************ 00:07:44.849 START TEST nvmf_target_discovery 00:07:44.849 ************************************ 00:07:44.849 19:36:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.108 * Looking for test storage... 
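Before the discovery suite output begins, note that the nvmf_filesystem teardown traced above (nvmftestfini) unwinds the kernel initiator modules and the test namespace; the visible steps amount to roughly the following (the _remove_spdk_ns helper runs with tracing suppressed, so deleting the namespace is an assumption here, not something shown in the log):

    modprobe -v -r nvme-tcp              # trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk     # assumed effect of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if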
00:07:45.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:45.108 Cannot find device "nvmf_tgt_br" 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.108 Cannot find device "nvmf_tgt_br2" 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:45.108 Cannot find device "nvmf_tgt_br" 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:45.108 Cannot find device "nvmf_tgt_br2" 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:45.108 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:45.109 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:45.367 19:36:10 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:45.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:07:45.367 00:07:45.367 --- 10.0.0.2 ping statistics --- 00:07:45.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.367 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:45.367 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:45.367 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:07:45.367 00:07:45.367 --- 10.0.0.3 ping statistics --- 00:07:45.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.367 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:45.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:45.367 00:07:45.367 --- 10.0.0.1 ping statistics --- 00:07:45.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.367 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.367 19:36:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66148 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66148 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66148 ']' 00:07:45.367 19:36:11 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.367 19:36:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.367 [2024-07-15 19:36:11.071495] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:45.367 [2024-07-15 19:36:11.071599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.625 [2024-07-15 19:36:11.213149] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.625 [2024-07-15 19:36:11.330893] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.625 [2024-07-15 19:36:11.330951] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.625 [2024-07-15 19:36:11.330965] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.625 [2024-07-15 19:36:11.330973] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.625 [2024-07-15 19:36:11.330981] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
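The nvmftestinit / nvmf_veth_init trace above builds a small veth-plus-bridge topology so the target can listen on 10.0.0.2 and 10.0.0.3 inside a network namespace while the initiator talks from 10.0.0.1 on the host side; condensed (the individual "ip link set ... up" calls are omitted, and the nvmf_tgt path is the one printed by the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3     # host-side reachability checks, as in the trace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF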
00:07:45.625 [2024-07-15 19:36:11.331425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.625 [2024-07-15 19:36:11.331846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.625 [2024-07-15 19:36:11.331974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.625 [2024-07-15 19:36:11.331977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 [2024-07-15 19:36:12.171682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 Null1 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.557 [2024-07-15 19:36:12.233061] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 Null2 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.557 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 Null3 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 Null4 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.558 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.816 
19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 4420 00:07:46.816 00:07:46.816 Discovery Log Number of Records 6, Generation counter 6 00:07:46.816 =====Discovery Log Entry 0====== 00:07:46.816 trtype: tcp 00:07:46.816 adrfam: ipv4 00:07:46.816 subtype: current discovery subsystem 00:07:46.816 treq: not required 00:07:46.816 portid: 0 00:07:46.816 trsvcid: 4420 00:07:46.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:46.816 traddr: 10.0.0.2 00:07:46.816 eflags: explicit discovery connections, duplicate discovery information 00:07:46.816 sectype: none 00:07:46.816 =====Discovery Log Entry 1====== 00:07:46.816 trtype: tcp 00:07:46.816 adrfam: ipv4 00:07:46.816 subtype: nvme subsystem 00:07:46.816 treq: not required 00:07:46.816 portid: 0 00:07:46.816 trsvcid: 4420 00:07:46.816 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:46.816 traddr: 10.0.0.2 00:07:46.816 eflags: none 00:07:46.816 sectype: none 00:07:46.816 =====Discovery Log Entry 2====== 00:07:46.816 trtype: tcp 00:07:46.816 adrfam: ipv4 00:07:46.816 subtype: nvme subsystem 00:07:46.816 treq: not required 00:07:46.816 portid: 0 00:07:46.816 trsvcid: 4420 00:07:46.816 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:46.816 traddr: 10.0.0.2 00:07:46.816 eflags: none 00:07:46.816 sectype: none 00:07:46.816 =====Discovery Log Entry 3====== 00:07:46.816 trtype: tcp 00:07:46.816 adrfam: ipv4 00:07:46.816 subtype: nvme subsystem 00:07:46.816 treq: not required 00:07:46.816 portid: 0 00:07:46.816 trsvcid: 4420 00:07:46.816 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:46.816 traddr: 10.0.0.2 00:07:46.816 eflags: none 00:07:46.816 sectype: none 00:07:46.816 =====Discovery Log Entry 4====== 00:07:46.816 trtype: tcp 00:07:46.816 adrfam: ipv4 00:07:46.816 subtype: nvme subsystem 00:07:46.816 treq: not required 00:07:46.816 portid: 0 00:07:46.816 trsvcid: 4420 00:07:46.816 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:46.816 traddr: 10.0.0.2 00:07:46.816 eflags: none 00:07:46.816 sectype: none 00:07:46.816 =====Discovery Log Entry 5====== 00:07:46.816 trtype: tcp 00:07:46.816 adrfam: ipv4 00:07:46.816 subtype: discovery subsystem referral 00:07:46.816 treq: not required 00:07:46.816 portid: 0 00:07:46.816 trsvcid: 4430 00:07:46.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:46.816 traddr: 10.0.0.2 00:07:46.816 eflags: none 00:07:46.816 sectype: none 00:07:46.816 Perform nvmf subsystem discovery via RPC 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.816 [ 00:07:46.816 { 00:07:46.816 "allow_any_host": true, 00:07:46.816 "hosts": [], 00:07:46.816 "listen_addresses": [ 00:07:46.816 { 00:07:46.816 "adrfam": "IPv4", 00:07:46.816 "traddr": "10.0.0.2", 00:07:46.816 "trsvcid": "4420", 00:07:46.816 "trtype": "TCP" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:46.816 "subtype": "Discovery" 00:07:46.816 }, 00:07:46.816 { 00:07:46.816 "allow_any_host": true, 00:07:46.816 "hosts": [], 00:07:46.816 "listen_addresses": [ 00:07:46.816 { 
00:07:46.816 "adrfam": "IPv4", 00:07:46.816 "traddr": "10.0.0.2", 00:07:46.816 "trsvcid": "4420", 00:07:46.816 "trtype": "TCP" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "max_cntlid": 65519, 00:07:46.816 "max_namespaces": 32, 00:07:46.816 "min_cntlid": 1, 00:07:46.816 "model_number": "SPDK bdev Controller", 00:07:46.816 "namespaces": [ 00:07:46.816 { 00:07:46.816 "bdev_name": "Null1", 00:07:46.816 "name": "Null1", 00:07:46.816 "nguid": "974795D1969147B4850F47BDB6C3EC0C", 00:07:46.816 "nsid": 1, 00:07:46.816 "uuid": "974795d1-9691-47b4-850f-47bdb6c3ec0c" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.816 "serial_number": "SPDK00000000000001", 00:07:46.816 "subtype": "NVMe" 00:07:46.816 }, 00:07:46.816 { 00:07:46.816 "allow_any_host": true, 00:07:46.816 "hosts": [], 00:07:46.816 "listen_addresses": [ 00:07:46.816 { 00:07:46.816 "adrfam": "IPv4", 00:07:46.816 "traddr": "10.0.0.2", 00:07:46.816 "trsvcid": "4420", 00:07:46.816 "trtype": "TCP" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "max_cntlid": 65519, 00:07:46.816 "max_namespaces": 32, 00:07:46.816 "min_cntlid": 1, 00:07:46.816 "model_number": "SPDK bdev Controller", 00:07:46.816 "namespaces": [ 00:07:46.816 { 00:07:46.816 "bdev_name": "Null2", 00:07:46.816 "name": "Null2", 00:07:46.816 "nguid": "37A1F56A7E664CA29170A1ED7A946EE6", 00:07:46.816 "nsid": 1, 00:07:46.816 "uuid": "37a1f56a-7e66-4ca2-9170-a1ed7a946ee6" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:46.816 "serial_number": "SPDK00000000000002", 00:07:46.816 "subtype": "NVMe" 00:07:46.816 }, 00:07:46.816 { 00:07:46.816 "allow_any_host": true, 00:07:46.816 "hosts": [], 00:07:46.816 "listen_addresses": [ 00:07:46.816 { 00:07:46.816 "adrfam": "IPv4", 00:07:46.816 "traddr": "10.0.0.2", 00:07:46.816 "trsvcid": "4420", 00:07:46.816 "trtype": "TCP" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "max_cntlid": 65519, 00:07:46.816 "max_namespaces": 32, 00:07:46.816 "min_cntlid": 1, 00:07:46.816 "model_number": "SPDK bdev Controller", 00:07:46.816 "namespaces": [ 00:07:46.816 { 00:07:46.816 "bdev_name": "Null3", 00:07:46.816 "name": "Null3", 00:07:46.816 "nguid": "F8B9C289ACC04D07B35FB45D4B53A5D3", 00:07:46.816 "nsid": 1, 00:07:46.816 "uuid": "f8b9c289-acc0-4d07-b35f-b45d4b53a5d3" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:46.816 "serial_number": "SPDK00000000000003", 00:07:46.816 "subtype": "NVMe" 00:07:46.816 }, 00:07:46.816 { 00:07:46.816 "allow_any_host": true, 00:07:46.816 "hosts": [], 00:07:46.816 "listen_addresses": [ 00:07:46.816 { 00:07:46.816 "adrfam": "IPv4", 00:07:46.816 "traddr": "10.0.0.2", 00:07:46.816 "trsvcid": "4420", 00:07:46.816 "trtype": "TCP" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "max_cntlid": 65519, 00:07:46.816 "max_namespaces": 32, 00:07:46.816 "min_cntlid": 1, 00:07:46.816 "model_number": "SPDK bdev Controller", 00:07:46.816 "namespaces": [ 00:07:46.816 { 00:07:46.816 "bdev_name": "Null4", 00:07:46.816 "name": "Null4", 00:07:46.816 "nguid": "6F59F269E6354920B7125B8F4BB9DB75", 00:07:46.816 "nsid": 1, 00:07:46.816 "uuid": "6f59f269-e635-4920-b712-5b8f4bb9db75" 00:07:46.816 } 00:07:46.816 ], 00:07:46.816 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:46.816 "serial_number": "SPDK00000000000004", 00:07:46.816 "subtype": "NVMe" 00:07:46.816 } 00:07:46.816 ] 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.816 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.817 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.817 rmmod nvme_tcp 00:07:47.074 rmmod nvme_fabrics 00:07:47.074 rmmod nvme_keyring 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66148 ']' 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66148 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66148 ']' 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66148 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66148 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.074 
19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.074 killing process with pid 66148 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66148' 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66148 00:07:47.074 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66148 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:47.332 00:07:47.332 real 0m2.353s 00:07:47.332 user 0m6.500s 00:07:47.332 sys 0m0.616s 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.332 19:36:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:47.332 ************************************ 00:07:47.332 END TEST nvmf_target_discovery 00:07:47.332 ************************************ 00:07:47.332 19:36:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:47.332 19:36:12 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:47.332 19:36:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.332 19:36:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.332 19:36:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.332 ************************************ 00:07:47.332 START TEST nvmf_referrals 00:07:47.332 ************************************ 00:07:47.332 19:36:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:47.332 * Looking for test storage... 
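The teardown traced above follows a simple pattern: discovery.sh walks the four test subsystems, deleting each NVMe-oF subsystem and its backing null bdev, then removes the discovery referral and checks that no bdevs remain before shutting the target down. A minimal sketch of that loop, assuming rpc_cmd is the scripts/rpc.py wrapper from the SPDK test helpers seen in the trace:

    # tear down the four subsystems and null bdevs created earlier in the test
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc_cmd bdev_null_delete "Null$i"
    done
    # drop the referral added during setup, then verify nothing is left behind
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')   # expected to be empty here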
00:07:47.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:47.332 Cannot find device "nvmf_tgt_br" 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:47.332 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.591 Cannot find device "nvmf_tgt_br2" 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:47.591 Cannot find device "nvmf_tgt_br" 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:47.591 Cannot find device "nvmf_tgt_br2" 
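The "Cannot find device" messages above are expected: nvmftestinit first tears down any interfaces left over from a previous run (each failure is followed by a tolerated "# true" in the trace) before nvmf_veth_init rebuilds the virtual topology. Condensed to the essential commands from the trace that follows, and keeping the interface and namespace names used by nvmf/common.sh, the bring-up looks roughly like this (a second pair, nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3, is created the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-side sanity check against the target address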
00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:47.591 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.592 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:47.592 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.592 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.592 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.592 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.592 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.592 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.592 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:47.593 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:47.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:07:47.860 00:07:47.860 --- 10.0.0.2 ping statistics --- 00:07:47.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.860 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:47.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:47.860 00:07:47.860 --- 10.0.0.3 ping statistics --- 00:07:47.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.860 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:47.860 00:07:47.860 --- 10.0.0.1 ping statistics --- 00:07:47.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.860 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66371 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66371 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66371 ']' 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
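Once the target application is up, referrals.sh exercises the discovery-referral RPCs end to end: it creates the TCP transport, exposes the discovery subsystem on 10.0.0.2:8009, registers referrals, and then checks that the RPC view and an nvme discover against the live target agree. A rough sketch of that verification flow, using only commands visible in the trace below (rpc_cmd, NVME_HOSTNQN and NVME_HOSTID come from the test environment set up earlier):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # view 1: what the target reports over RPC
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # view 2: what an initiator sees on the wire
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The test passes when both views print the same sorted address list; the same comparison is repeated after removing referrals and after re-adding them with explicit subsystem NQNs.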
00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.860 19:36:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.860 [2024-07-15 19:36:13.502516] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:47.860 [2024-07-15 19:36:13.503130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.117 [2024-07-15 19:36:13.644523] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.117 [2024-07-15 19:36:13.769185] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.117 [2024-07-15 19:36:13.769251] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.117 [2024-07-15 19:36:13.769265] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.117 [2024-07-15 19:36:13.769277] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.117 [2024-07-15 19:36:13.769286] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.117 [2024-07-15 19:36:13.769453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.117 [2024-07-15 19:36:13.769702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.117 [2024-07-15 19:36:13.770186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.117 [2024-07-15 19:36:13.770213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.046 [2024-07-15 19:36:14.596451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.046 [2024-07-15 19:36:14.626088] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 
--hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.046 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.303 19:36:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.303 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:49.560 19:36:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.560 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:49.817 19:36:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.817 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:50.074 
19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:50.074 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.075 rmmod nvme_tcp 00:07:50.075 rmmod nvme_fabrics 00:07:50.075 rmmod nvme_keyring 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66371 ']' 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66371 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66371 ']' 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66371 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66371 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66371' 00:07:50.075 killing process with pid 66371 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66371 00:07:50.075 19:36:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66371 00:07:50.332 19:36:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.332 19:36:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.332 19:36:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.332 19:36:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.332 19:36:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.332 19:36:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.332 19:36:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.332 19:36:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.589 19:36:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:50.589 00:07:50.589 real 0m3.153s 00:07:50.589 user 0m10.192s 00:07:50.589 sys 0m0.859s 00:07:50.589 19:36:16 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.589 19:36:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.589 ************************************ 00:07:50.589 END TEST nvmf_referrals 00:07:50.589 ************************************ 00:07:50.589 19:36:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:50.589 19:36:16 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:50.589 19:36:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.590 19:36:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.590 19:36:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.590 ************************************ 00:07:50.590 START TEST nvmf_connect_disconnect 00:07:50.590 ************************************ 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:50.590 * Looking for test storage... 00:07:50.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.590 19:36:16 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:07:50.590 Cannot find device "nvmf_tgt_br" 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:50.590 Cannot find device "nvmf_tgt_br2" 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:50.590 Cannot find device "nvmf_tgt_br" 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:50.590 Cannot find device "nvmf_tgt_br2" 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:50.590 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:50.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:50.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:50.847 19:36:16 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:50.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:07:50.847 00:07:50.847 --- 10.0.0.2 ping statistics --- 00:07:50.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.847 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:50.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:50.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:07:50.847 00:07:50.847 --- 10.0.0.3 ping statistics --- 00:07:50.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.847 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:50.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:50.847 00:07:50.847 --- 10.0.0.1 ping statistics --- 00:07:50.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.847 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.847 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66674 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66674 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66674 ']' 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.104 19:36:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.104 [2024-07-15 19:36:16.725229] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:07:51.104 [2024-07-15 19:36:16.725329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.104 [2024-07-15 19:36:16.868217] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.360 [2024-07-15 19:36:17.003862] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
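The nvmf_veth_init sequence traced above is effectively a self-contained recipe: the initiator stays in the root namespace on 10.0.0.1, the target side gets two veth ends (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, and a bridge plus two iptables rules stitch the halves together before the three pings verify reachability. A condensed standalone sketch of the same steps (assuming root privileges and the iproute2/iptables tools; error handling and the matching teardown are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic arriving on the initiator interface
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow frames to hairpin across the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # root namespace can reach both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # and the namespace can reach the initiator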
00:07:51.360 [2024-07-15 19:36:17.003935] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.360 [2024-07-15 19:36:17.003950] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.360 [2024-07-15 19:36:17.003962] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.360 [2024-07-15 19:36:17.003971] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.360 [2024-07-15 19:36:17.004153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.360 [2024-07-15 19:36:17.004267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.360 [2024-07-15 19:36:17.004872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.360 [2024-07-15 19:36:17.004908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.923 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.923 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:51.923 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.923 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.923 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.179 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.179 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.180 [2024-07-15 19:36:17.729526] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.180 [2024-07-15 19:36:17.800075] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:52.180 19:36:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:54.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.681 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:03.681 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:03.681 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:03.681 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:03.681 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.681 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:03.681 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.682 rmmod nvme_tcp 00:08:03.682 rmmod nvme_fabrics 00:08:03.682 rmmod nvme_keyring 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66674 ']' 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66674 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66674 ']' 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66674 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66674 00:08:03.682 killing process with pid 66674 00:08:03.682 19:36:29 
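Behind the rpc_cmd traces above, the whole connect_disconnect fixture is provisioned with five JSON-RPC calls before the loop of five connect/disconnect iterations runs and the target is torn down again. Roughly equivalent standalone commands, shown as a sketch (assuming SPDK's scripts/rpc.py run from the repo root against the default /var/tmp/spdk.sock socket; the nvme-cli lines are a hypothetical reconstruction of the loop body, which is not traced in this excerpt):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0          # TCP transport with the same options as above
scripts/rpc.py bdev_malloc_create 64 512                             # 64 MB RAM-backed bdev, 512-byte blocks -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host, fixed serial
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                      # expose Malloc0 as a namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# each of the 5 iterations then amounts to something like:
#   nvme connect    -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
#   nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # prints "disconnected 1 controller(s)", as logged above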
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66674' 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66674 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66674 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:03.682 00:08:03.682 real 0m13.282s 00:08:03.682 user 0m48.709s 00:08:03.682 sys 0m1.916s 00:08:03.682 ************************************ 00:08:03.682 END TEST nvmf_connect_disconnect 00:08:03.682 ************************************ 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.682 19:36:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.940 19:36:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:03.940 19:36:29 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:03.940 19:36:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.940 19:36:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.940 19:36:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.940 ************************************ 00:08:03.940 START TEST nvmf_multitarget 00:08:03.940 ************************************ 00:08:03.940 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:03.940 * Looking for test storage... 
00:08:03.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.941 19:36:29 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:03.941 Cannot find device "nvmf_tgt_br" 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.941 Cannot find device "nvmf_tgt_br2" 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:03.941 Cannot find device "nvmf_tgt_br" 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:03.941 Cannot find device "nvmf_tgt_br2" 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:03.941 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:08:04.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:04.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:04.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:04.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:08:04.200 00:08:04.200 --- 10.0.0.2 ping statistics --- 00:08:04.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.200 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:04.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:04.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:08:04.200 00:08:04.200 --- 10.0.0.3 ping statistics --- 00:08:04.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.200 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:04.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:04.200 00:08:04.200 --- 10.0.0.1 ping statistics --- 00:08:04.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.200 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:04.200 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67077 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67077 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67077 ']' 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:04.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
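The nvmfappstart call traced just above reduces to launching the target binary inside the test namespace and then blocking until its JSON-RPC socket answers. A minimal sketch of the same idea (assuming the build path used in this run; the polling loop is a crude stand-in for the waitforlisten helper):

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &   # instance 0, all tracepoint groups, 4-core mask
nvmfpid=$!                                                           # pid of the ip-netns-exec wrapper; enough for later kill/cleanup
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done                # wait for the RPC socket to appear
scripts/rpc.py rpc_get_methods > /dev/null                           # confirm the RPC server is actually answering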
00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.201 19:36:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:04.458 [2024-07-15 19:36:29.989017] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:08:04.458 [2024-07-15 19:36:29.989130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.458 [2024-07-15 19:36:30.125334] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.716 [2024-07-15 19:36:30.248588] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.716 [2024-07-15 19:36:30.248866] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.716 [2024-07-15 19:36:30.248999] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.716 [2024-07-15 19:36:30.249234] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.716 [2024-07-15 19:36:30.249410] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.716 [2024-07-15 19:36:30.249539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.716 [2024-07-15 19:36:30.249712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.716 [2024-07-15 19:36:30.249852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.716 [2024-07-15 19:36:30.249859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:04.716 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:04.974 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:04.974 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:04.974 "nvmf_tgt_1" 00:08:04.974 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:05.232 "nvmf_tgt_2" 00:08:05.233 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
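The multitarget assertions drive a dedicated helper, multitarget_rpc.py, which wraps the nvmf_create_target / nvmf_get_targets / nvmf_delete_target RPCs; the matching delete calls and the final length check follow just below. Condensed into one place, the create/list/delete cycle looks like this sketch (assuming the helper path from this run and the default RPC socket):

MT_RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
"$MT_RPC" nvmf_create_target -n nvmf_tgt_1 -s 32              # two extra targets, same -s value as traced above
"$MT_RPC" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$("$MT_RPC" nvmf_get_targets | jq length)" -eq 3 ]         # default target + the two just created
"$MT_RPC" nvmf_delete_target -n nvmf_tgt_1
"$MT_RPC" nvmf_delete_target -n nvmf_tgt_2
[ "$("$MT_RPC" nvmf_get_targets | jq length)" -eq 1 ]         # back to only the default target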
00:08:05.233 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:05.233 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:05.233 19:36:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:05.491 true 00:08:05.491 19:36:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:05.491 true 00:08:05.491 19:36:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:05.491 19:36:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.750 rmmod nvme_tcp 00:08:05.750 rmmod nvme_fabrics 00:08:05.750 rmmod nvme_keyring 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67077 ']' 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67077 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67077 ']' 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67077 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67077 00:08:05.750 killing process with pid 67077 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67077' 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67077 00:08:05.750 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67077 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:06.009 ************************************ 00:08:06.009 END TEST nvmf_multitarget 00:08:06.009 ************************************ 00:08:06.009 00:08:06.009 real 0m2.233s 00:08:06.009 user 0m6.775s 00:08:06.009 sys 0m0.633s 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.009 19:36:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:06.009 19:36:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:06.009 19:36:31 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:06.009 19:36:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.009 19:36:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.009 19:36:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:06.009 ************************************ 00:08:06.009 START TEST nvmf_rpc 00:08:06.009 ************************************ 00:08:06.009 19:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:06.269 * Looking for test storage... 
00:08:06.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:06.269 Cannot find device "nvmf_tgt_br" 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.269 Cannot find device "nvmf_tgt_br2" 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:06.269 Cannot find device "nvmf_tgt_br" 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:06.269 Cannot find device "nvmf_tgt_br2" 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:08:06.269 19:36:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:06.269 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:06.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:08:06.528 00:08:06.528 --- 10.0.0.2 ping statistics --- 00:08:06.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.528 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:06.528 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:06.528 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:06.528 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:06.528 00:08:06.528 --- 10.0.0.3 ping statistics --- 00:08:06.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.529 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:06.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:06.529 00:08:06.529 --- 10.0.0.1 ping statistics --- 00:08:06.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.529 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.529 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67296 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67296 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67296 ']' 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.787 19:36:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.787 [2024-07-15 19:36:32.382744] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:08:06.787 [2024-07-15 19:36:32.382850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.787 [2024-07-15 19:36:32.527192] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.045 [2024-07-15 19:36:32.676332] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.045 [2024-07-15 19:36:32.676671] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:07.045 [2024-07-15 19:36:32.676899] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.045 [2024-07-15 19:36:32.677190] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.045 [2024-07-15 19:36:32.677361] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.045 [2024-07-15 19:36:32.677728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.045 [2024-07-15 19:36:32.677862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.045 [2024-07-15 19:36:32.677941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.045 [2024-07-15 19:36:32.677947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:07.978 "poll_groups": [ 00:08:07.978 { 00:08:07.978 "admin_qpairs": 0, 00:08:07.978 "completed_nvme_io": 0, 00:08:07.978 "current_admin_qpairs": 0, 00:08:07.978 "current_io_qpairs": 0, 00:08:07.978 "io_qpairs": 0, 00:08:07.978 "name": "nvmf_tgt_poll_group_000", 00:08:07.978 "pending_bdev_io": 0, 00:08:07.978 "transports": [] 00:08:07.978 }, 00:08:07.978 { 00:08:07.978 "admin_qpairs": 0, 00:08:07.978 "completed_nvme_io": 0, 00:08:07.978 "current_admin_qpairs": 0, 00:08:07.978 "current_io_qpairs": 0, 00:08:07.978 "io_qpairs": 0, 00:08:07.978 "name": "nvmf_tgt_poll_group_001", 00:08:07.978 "pending_bdev_io": 0, 00:08:07.978 "transports": [] 00:08:07.978 }, 00:08:07.978 { 00:08:07.978 "admin_qpairs": 0, 00:08:07.978 "completed_nvme_io": 0, 00:08:07.978 "current_admin_qpairs": 0, 00:08:07.978 "current_io_qpairs": 0, 00:08:07.978 "io_qpairs": 0, 00:08:07.978 "name": "nvmf_tgt_poll_group_002", 00:08:07.978 "pending_bdev_io": 0, 00:08:07.978 "transports": [] 00:08:07.978 }, 00:08:07.978 { 00:08:07.978 "admin_qpairs": 0, 00:08:07.978 "completed_nvme_io": 0, 00:08:07.978 "current_admin_qpairs": 0, 00:08:07.978 "current_io_qpairs": 0, 00:08:07.978 "io_qpairs": 0, 00:08:07.978 "name": "nvmf_tgt_poll_group_003", 00:08:07.978 "pending_bdev_io": 0, 00:08:07.978 "transports": [] 00:08:07.978 } 00:08:07.978 ], 00:08:07.978 "tick_rate": 2200000000 00:08:07.978 }' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.978 [2024-07-15 19:36:33.551746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:07.978 "poll_groups": [ 00:08:07.978 { 00:08:07.978 "admin_qpairs": 0, 00:08:07.978 "completed_nvme_io": 0, 00:08:07.978 "current_admin_qpairs": 0, 00:08:07.978 "current_io_qpairs": 0, 00:08:07.978 "io_qpairs": 0, 00:08:07.978 "name": "nvmf_tgt_poll_group_000", 00:08:07.978 "pending_bdev_io": 0, 00:08:07.978 "transports": [ 00:08:07.978 { 00:08:07.978 "trtype": "TCP" 00:08:07.978 } 00:08:07.978 ] 00:08:07.978 }, 00:08:07.978 { 00:08:07.978 "admin_qpairs": 0, 00:08:07.978 "completed_nvme_io": 0, 00:08:07.978 "current_admin_qpairs": 0, 00:08:07.978 "current_io_qpairs": 0, 00:08:07.978 "io_qpairs": 0, 00:08:07.978 "name": "nvmf_tgt_poll_group_001", 00:08:07.978 "pending_bdev_io": 0, 00:08:07.978 "transports": [ 00:08:07.978 { 00:08:07.978 "trtype": "TCP" 00:08:07.978 } 00:08:07.978 ] 00:08:07.978 }, 00:08:07.978 { 00:08:07.978 "admin_qpairs": 0, 00:08:07.978 "completed_nvme_io": 0, 00:08:07.978 "current_admin_qpairs": 0, 00:08:07.978 "current_io_qpairs": 0, 00:08:07.978 "io_qpairs": 0, 00:08:07.978 "name": "nvmf_tgt_poll_group_002", 00:08:07.978 "pending_bdev_io": 0, 00:08:07.978 "transports": [ 00:08:07.978 { 00:08:07.978 "trtype": "TCP" 00:08:07.978 } 00:08:07.978 ] 00:08:07.978 }, 00:08:07.978 { 00:08:07.978 "admin_qpairs": 0, 00:08:07.978 "completed_nvme_io": 0, 00:08:07.978 "current_admin_qpairs": 0, 00:08:07.978 "current_io_qpairs": 0, 00:08:07.978 "io_qpairs": 0, 00:08:07.978 "name": "nvmf_tgt_poll_group_003", 00:08:07.978 "pending_bdev_io": 0, 00:08:07.978 "transports": [ 00:08:07.978 { 00:08:07.978 "trtype": "TCP" 00:08:07.978 } 00:08:07.978 ] 00:08:07.978 } 00:08:07.978 ], 00:08:07.978 "tick_rate": 2200000000 00:08:07.978 }' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
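The jcount/jsum assertions traced here are thin jq wrappers over the captured nvmf_get_stats output: jcount counts how many fields match a filter, jsum adds them up, so the test can check that -m 0xF produced four poll groups, that no transport is attached until nvmf_create_transport -t tcp -o -u 8192 registers TCP on every poll group, and that all qpair counters start at zero. A sketch of the pattern (the exact helper bodies in target/rpc.sh are assumed):

    stats=$(rpc_cmd nvmf_get_stats)
    jcount() { jq "$1" <<< "$stats" | wc -l; }                        # number of matching fields
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }  # sum of matching values
    (( $(jcount '.poll_groups[].name') == 4 ))      # one poll group per core in -m 0xF
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))   # nothing has connected yet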
00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.978 Malloc1 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.978 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.235 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.235 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.235 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.235 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.236 [2024-07-15 19:36:33.762018] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -a 10.0.0.2 -s 4420 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -a 10.0.0.2 -s 4420 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -a 10.0.0.2 -s 4420 00:08:08.236 [2024-07-15 19:36:33.790421] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb' 00:08:08.236 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:08.236 could not add new controller: failed to write to nvme-fabrics device 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:08.236 19:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:10.761 19:36:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:10.761 19:36:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:10.761 19:36:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:10.761 19:36:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:10.761 19:36:36 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:10.761 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.762 [2024-07-15 19:36:36.101455] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb' 00:08:10.762 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:10.762 could not add new controller: failed to write to nvme-fabrics device 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:10.762 19:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:12.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:12.662 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:12.920 19:36:38 
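target/rpc.sh@58 through @78 is an access-control check: with no hosts whitelisted the nvme connect has to fail with "does not allow host", it succeeds once nvmf_subsystem_add_host registers the initiator NQN, fails again after nvmf_subsystem_remove_host, and finally succeeds for any initiator after nvmf_subsystem_allow_any_host -e. The NOT helper just inverts an expected failure; roughly (a simplified sketch, the real helper in autotest_common.sh also validates the executable, and HOSTNQN stands in for the full uuid-based NQN used in the log):

    NOT() { if "$@"; then return 1; else return 0; fi; }   # assumed simplification of the helper
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN"                               # rejected: host not on the allow list
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1   # now anyone may connect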
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.920 [2024-07-15 19:36:38.500337] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:12.920 19:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.476 [2024-07-15 19:36:40.811135] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:15.476 19:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:17.375 19:36:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.375 [2024-07-15 19:36:43.114271] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.375 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:17.634 19:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:17.634 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:17.634 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:17.634 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:17.634 19:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:19.536 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:19.536 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:19.536 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.794 [2024-07-15 19:36:45.417355] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.794 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.795 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.053 19:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:20.053 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:20.053 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:20.053 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:20.053 19:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:21.954 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:21.954 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:21.954 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.954 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:21.954 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.954 
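waitforserial and waitforserial_disconnect (common/autotest_common.sh@1198 and @1219 in the trace) bridge the gap between nvme connect returning and the namespace actually appearing as a block device: they poll lsblk for the subsystem serial SPDKISFASTANDAWESOME until it shows up, or until it is gone again after nvme disconnect. Roughly (a sketch with the retry bookkeeping simplified):

    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2   # give the kernel time to surface the namespace block device
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME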
19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:21.954 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 [2024-07-15 19:36:47.816540] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.213 19:36:47 
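Each pass of the rpc.sh@81 loop traced above runs one full subsystem lifecycle against the live target: create the subsystem with serial SPDKISFASTANDAWESOME, expose it on 10.0.0.2:4420, attach Malloc1 as namespace 5, open it to any host, connect and verify with nvme-cli, then disconnect and tear the subsystem back down. Condensed to the RPC and CLI calls seen in the trace (rpc_cmd is the test wrapper around the target's JSON-RPC socket; HOSTNQN stands in for the uuid-based NQN in the log):

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done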
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.213 19:36:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:22.471 19:36:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.471 19:36:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:22.471 19:36:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.471 19:36:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:22.471 19:36:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:24.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.374 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 [2024-07-15 19:36:50.123607] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.375 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.633 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 [2024-07-15 19:36:50.171589] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 [2024-07-15 19:36:50.219616] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
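The rpc.sh@99 through @107 loop being traced here is a pure control-plane variant of the same cycle: five more times it creates the subsystem, adds the 10.0.0.2:4420 listener and the Malloc1 namespace, enables allow-any-host, then removes namespace 1 and deletes the subsystem without ever connecting a host, exercising the RPC add/remove paths back to back. In short (same calls as in the trace):

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done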
00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 [2024-07-15 19:36:50.267685] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
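After the final iteration, the test reads nvmf_get_stats one more time (the JSON below) and uses the jsum helper sketched earlier to assert that the admin and I/O qpair counters are now non-zero, i.e. the connect/disconnect cycles really touched the poll groups; the EXIT trap then runs nvmftestfini, which unloads the nvme-tcp modules and kills the target. rpc_cmd itself is a thin wrapper around the target's RPC socket, roughly (helper body assumed; socket path from the log):

    rpc_cmd() {
        # The target was started inside nvmf_tgt_ns_spdk and listens on its default socket.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"
    }
    stats=$(rpc_cmd nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 70 in this run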
00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 [2024-07-15 19:36:50.315730] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:24.634 "poll_groups": [ 00:08:24.634 { 00:08:24.634 "admin_qpairs": 2, 00:08:24.634 "completed_nvme_io": 65, 00:08:24.634 "current_admin_qpairs": 0, 00:08:24.634 "current_io_qpairs": 0, 00:08:24.634 "io_qpairs": 16, 00:08:24.634 "name": "nvmf_tgt_poll_group_000", 00:08:24.634 "pending_bdev_io": 0, 00:08:24.634 "transports": [ 00:08:24.634 { 00:08:24.634 "trtype": "TCP" 00:08:24.634 } 00:08:24.634 ] 00:08:24.634 }, 00:08:24.634 { 00:08:24.634 "admin_qpairs": 3, 00:08:24.634 "completed_nvme_io": 120, 00:08:24.634 "current_admin_qpairs": 0, 00:08:24.634 "current_io_qpairs": 0, 00:08:24.634 "io_qpairs": 17, 00:08:24.634 "name": "nvmf_tgt_poll_group_001", 00:08:24.634 "pending_bdev_io": 0, 00:08:24.634 "transports": [ 00:08:24.634 { 00:08:24.634 "trtype": "TCP" 00:08:24.634 } 00:08:24.634 ] 00:08:24.634 }, 00:08:24.634 { 00:08:24.634 "admin_qpairs": 1, 00:08:24.634 
"completed_nvme_io": 167, 00:08:24.634 "current_admin_qpairs": 0, 00:08:24.634 "current_io_qpairs": 0, 00:08:24.634 "io_qpairs": 19, 00:08:24.634 "name": "nvmf_tgt_poll_group_002", 00:08:24.634 "pending_bdev_io": 0, 00:08:24.634 "transports": [ 00:08:24.634 { 00:08:24.634 "trtype": "TCP" 00:08:24.634 } 00:08:24.634 ] 00:08:24.634 }, 00:08:24.634 { 00:08:24.634 "admin_qpairs": 1, 00:08:24.634 "completed_nvme_io": 68, 00:08:24.634 "current_admin_qpairs": 0, 00:08:24.634 "current_io_qpairs": 0, 00:08:24.634 "io_qpairs": 18, 00:08:24.634 "name": "nvmf_tgt_poll_group_003", 00:08:24.634 "pending_bdev_io": 0, 00:08:24.634 "transports": [ 00:08:24.634 { 00:08:24.634 "trtype": "TCP" 00:08:24.634 } 00:08:24.634 ] 00:08:24.634 } 00:08:24.634 ], 00:08:24.634 "tick_rate": 2200000000 00:08:24.634 }' 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:24.634 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:24.892 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:24.892 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:24.892 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:24.892 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:24.892 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.893 rmmod nvme_tcp 00:08:24.893 rmmod nvme_fabrics 00:08:24.893 rmmod nvme_keyring 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67296 ']' 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67296 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67296 ']' 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67296 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67296 00:08:24.893 killing process with pid 67296 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67296' 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67296 00:08:24.893 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67296 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:25.151 00:08:25.151 real 0m19.088s 00:08:25.151 user 1m11.227s 00:08:25.151 sys 0m2.695s 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.151 ************************************ 00:08:25.151 END TEST nvmf_rpc 00:08:25.151 ************************************ 00:08:25.151 19:36:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.151 19:36:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:25.151 19:36:50 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:25.151 19:36:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.151 19:36:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.151 19:36:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.410 ************************************ 00:08:25.410 START TEST nvmf_invalid 00:08:25.410 ************************************ 00:08:25.410 19:36:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:25.410 * Looking for test storage... 
00:08:25.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.410 
19:36:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.410 19:36:51 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:25.410 Cannot find device "nvmf_tgt_br" 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:25.410 Cannot find device "nvmf_tgt_br2" 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:25.410 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:25.410 Cannot find device "nvmf_tgt_br" 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:25.411 Cannot find device "nvmf_tgt_br2" 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:25.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:25.411 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:25.411 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:25.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:25.669 00:08:25.669 --- 10.0.0.2 ping statistics --- 00:08:25.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.669 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:25.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:25.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:25.669 00:08:25.669 --- 10.0.0.3 ping statistics --- 00:08:25.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.669 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:25.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:25.669 00:08:25.669 --- 10.0.0.1 ping statistics --- 00:08:25.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.669 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67806 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67806 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67806 ']' 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.669 19:36:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:25.928 [2024-07-15 19:36:51.458619] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
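The nvmf_veth_init sequence traced above is what builds the topology the 10.0.0.x pings just verified before nvmf_tgt is started. Condensed into a sketch (commands copied from the nvmf/common.sh steps in the trace; removal of stale interfaces and the surrounding error handling are omitted):

# target side lives in its own network namespace, reached over veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target interfaces
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT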
00:08:25.928 [2024-07-15 19:36:51.458693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.928 [2024-07-15 19:36:51.595053] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.186 [2024-07-15 19:36:51.729406] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.186 [2024-07-15 19:36:51.729763] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.186 [2024-07-15 19:36:51.729926] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.186 [2024-07-15 19:36:51.730241] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.186 [2024-07-15 19:36:51.730293] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.186 [2024-07-15 19:36:51.730517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.186 [2024-07-15 19:36:51.730625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.186 [2024-07-15 19:36:51.731350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.186 [2024-07-15 19:36:51.731362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.751 19:36:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.751 19:36:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:26.751 19:36:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.751 19:36:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.751 19:36:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:26.751 19:36:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.751 19:36:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:26.751 19:36:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13567 00:08:27.009 [2024-07-15 19:36:52.769462] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:27.267 19:36:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 19:36:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13567 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:27.267 request: 00:08:27.267 { 00:08:27.267 "method": "nvmf_create_subsystem", 00:08:27.267 "params": { 00:08:27.267 "nqn": "nqn.2016-06.io.spdk:cnode13567", 00:08:27.267 "tgt_name": "foobar" 00:08:27.267 } 00:08:27.267 } 00:08:27.267 Got JSON-RPC error response 00:08:27.267 GoRPCClient: error on JSON-RPC call' 00:08:27.267 19:36:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 19:36:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13567 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:27.267 
request: 00:08:27.267 { 00:08:27.267 "method": "nvmf_create_subsystem", 00:08:27.267 "params": { 00:08:27.268 "nqn": "nqn.2016-06.io.spdk:cnode13567", 00:08:27.268 "tgt_name": "foobar" 00:08:27.268 } 00:08:27.268 } 00:08:27.268 Got JSON-RPC error response 00:08:27.268 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:27.268 19:36:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:27.268 19:36:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31267 00:08:27.526 [2024-07-15 19:36:53.073761] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31267: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:27.527 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 19:36:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31267 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:27.527 request: 00:08:27.527 { 00:08:27.527 "method": "nvmf_create_subsystem", 00:08:27.527 "params": { 00:08:27.527 "nqn": "nqn.2016-06.io.spdk:cnode31267", 00:08:27.527 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:27.527 } 00:08:27.527 } 00:08:27.527 Got JSON-RPC error response 00:08:27.527 GoRPCClient: error on JSON-RPC call' 00:08:27.527 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 19:36:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31267 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:27.527 request: 00:08:27.527 { 00:08:27.527 "method": "nvmf_create_subsystem", 00:08:27.527 "params": { 00:08:27.527 "nqn": "nqn.2016-06.io.spdk:cnode31267", 00:08:27.527 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:27.527 } 00:08:27.527 } 00:08:27.527 Got JSON-RPC error response 00:08:27.527 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:27.527 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:27.527 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1623 00:08:27.784 [2024-07-15 19:36:53.386061] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1623: invalid model number 'SPDK_Controller' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 19:36:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode1623], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:27.784 request: 00:08:27.784 { 00:08:27.784 "method": "nvmf_create_subsystem", 00:08:27.784 "params": { 00:08:27.784 "nqn": "nqn.2016-06.io.spdk:cnode1623", 00:08:27.784 "model_number": "SPDK_Controller\u001f" 00:08:27.784 } 00:08:27.784 } 00:08:27.784 Got JSON-RPC error response 00:08:27.784 GoRPCClient: error on JSON-RPC call' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 19:36:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode1623], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:27.784 request: 00:08:27.784 { 00:08:27.784 "method": "nvmf_create_subsystem", 00:08:27.784 "params": { 00:08:27.784 "nqn": "nqn.2016-06.io.spdk:cnode1623", 00:08:27.784 "model_number": "SPDK_Controller\u001f" 00:08:27.784 } 00:08:27.784 } 00:08:27.784 Got JSON-RPC error response 00:08:27.784 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:27.784 19:36:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:27.784 19:36:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
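The long printf/echo run above (and continuing below) is target/invalid.sh's gen_random_s helper assembling a deliberately malformed string one character at a time from the code-point table 32..127; the same helper produces both the 21-character serial number and the 41-character model number used later in this trace. A hedged reconstruction is shown below: only the per-character steps are visible in the xtrace, so the $RANDOM index expression is an assumption, and RANDOM=0 is seeded earlier (invalid.sh@16), which is why the generated strings are reproducible across runs.

# Sketch of gen_random_s as suggested by the trace: pick <length> code points
# from the 32..127 table, convert each decimal value to hex with printf,
# expand it to a character with echo -e, and append it to the result.
gen_random_s() {
    local length=$1 ll code string=
    local chars=($(seq 32 127))                # same table as the chars=('32' ... '127') array above
    for (( ll = 0; ll < length; ll++ )); do
        code=${chars[RANDOM % ${#chars[@]}]}   # assumed selection expression
        string+=$(echo -e "\x$(printf %x "$code")")
    done
    echo "$string"
}

With RANDOM=0, gen_random_s 21 yields the 'N<{e#;mqPFg[}UYO#t)?o' serial number that the trace below feeds to nvmf_create_subsystem -s for the invalid-SN case.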
00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'N<{e#;mqPFg[}UYO#t)?o' 00:08:27.784 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'N<{e#;mqPFg[}UYO#t)?o' nqn.2016-06.io.spdk:cnode24101 00:08:28.041 [2024-07-15 19:36:53.798420] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24101: invalid serial number 'N<{e#;mqPFg[}UYO#t)?o' 00:08:28.041 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 19:36:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24101 serial_number:N<{e#;mqPFg[}UYO#t)?o], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN N<{e#;mqPFg[}UYO#t)?o 00:08:28.041 request: 00:08:28.041 { 00:08:28.041 "method": "nvmf_create_subsystem", 00:08:28.041 "params": { 00:08:28.041 "nqn": "nqn.2016-06.io.spdk:cnode24101", 00:08:28.041 "serial_number": "N<{e#;mqPFg[}UYO#t)?o" 00:08:28.041 } 00:08:28.041 } 00:08:28.041 Got JSON-RPC error response 00:08:28.041 GoRPCClient: error on JSON-RPC call' 00:08:28.041 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 19:36:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24101 serial_number:N<{e#;mqPFg[}UYO#t)?o], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN N<{e#;mqPFg[}UYO#t)?o 00:08:28.041 request: 00:08:28.041 { 00:08:28.041 "method": "nvmf_create_subsystem", 00:08:28.041 "params": { 00:08:28.041 "nqn": "nqn.2016-06.io.spdk:cnode24101", 00:08:28.041 "serial_number": "N<{e#;mqPFg[}UYO#t)?o" 00:08:28.041 } 00:08:28.041 } 00:08:28.041 Got JSON-RPC error response 00:08:28.041 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:28.301 19:36:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 
19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.301 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:28.302 
19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '6\5/jNk4wv}j= aNJmJuh>x^GethZ!iT.R=?d=' 00:08:28.302 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '6\5/jNk4wv}j= aNJmJuh>x^GethZ!iT.R=?d=' nqn.2016-06.io.spdk:cnode6291 00:08:28.560 [2024-07-15 19:36:54.230822] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6291: invalid model number '6\5/jNk4wv}j= aNJmJuh>x^GethZ!iT.R=?d=' 00:08:28.560 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 19:36:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:6\5/jNk4wv}j= aNJmJuh>x^GethZ!iT.R=?d= nqn:nqn.2016-06.io.spdk:cnode6291], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 6\5/jNk4wv}j= aNJmJuh>x^GethZ!iT.R=?d= 00:08:28.560 request: 00:08:28.560 { 00:08:28.560 "method": "nvmf_create_subsystem", 00:08:28.560 "params": { 00:08:28.560 "nqn": "nqn.2016-06.io.spdk:cnode6291", 00:08:28.560 "model_number": "6\\5/jNk4wv}j= a\u007fNJmJuh>x^Geth\u007fZ!iT.R=?d=\u007f" 00:08:28.560 } 00:08:28.560 } 00:08:28.560 Got JSON-RPC error response 00:08:28.560 GoRPCClient: error on JSON-RPC call' 00:08:28.560 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 19:36:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:6\5/jNk4wv}j= aNJmJuh>x^GethZ!iT.R=?d= nqn:nqn.2016-06.io.spdk:cnode6291], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 6\5/jNk4wv}j= aNJmJuh>x^GethZ!iT.R=?d= 00:08:28.560 request: 00:08:28.560 { 00:08:28.560 "method": "nvmf_create_subsystem", 00:08:28.560 "params": { 00:08:28.560 "nqn": "nqn.2016-06.io.spdk:cnode6291", 00:08:28.560 "model_number": "6\\5/jNk4wv}j= a\u007fNJmJuh>x^Geth\u007fZ!iT.R=?d=\u007f" 00:08:28.560 } 00:08:28.560 } 00:08:28.560 Got JSON-RPC error response 
00:08:28.560 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:28.560 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:28.819 [2024-07-15 19:36:54.471132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.819 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:29.083 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:29.083 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:29.083 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:29.083 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:29.083 19:36:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:29.351 [2024-07-15 19:36:55.053027] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:29.352 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 19:36:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:08:29.352 request: 00:08:29.352 { 00:08:29.352 "method": "nvmf_subsystem_remove_listener", 00:08:29.352 "params": { 00:08:29.352 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:29.352 "listen_address": { 00:08:29.352 "trtype": "tcp", 00:08:29.352 "traddr": "", 00:08:29.352 "trsvcid": "4421" 00:08:29.352 } 00:08:29.352 } 00:08:29.352 } 00:08:29.352 Got JSON-RPC error response 00:08:29.352 GoRPCClient: error on JSON-RPC call' 00:08:29.352 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 19:36:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:08:29.352 request: 00:08:29.352 { 00:08:29.352 "method": "nvmf_subsystem_remove_listener", 00:08:29.352 "params": { 00:08:29.352 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:29.352 "listen_address": { 00:08:29.352 "trtype": "tcp", 00:08:29.352 "traddr": "", 00:08:29.352 "trsvcid": "4421" 00:08:29.352 } 00:08:29.352 } 00:08:29.352 } 00:08:29.352 Got JSON-RPC error response 00:08:29.352 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:29.352 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12471 -i 0 00:08:29.609 [2024-07-15 19:36:55.301240] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12471: invalid cntlid range [0-65519] 00:08:29.609 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 19:36:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12471], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:08:29.609 request: 00:08:29.609 { 00:08:29.609 "method": "nvmf_create_subsystem", 
00:08:29.609 "params": { 00:08:29.609 "nqn": "nqn.2016-06.io.spdk:cnode12471", 00:08:29.609 "min_cntlid": 0 00:08:29.609 } 00:08:29.609 } 00:08:29.609 Got JSON-RPC error response 00:08:29.609 GoRPCClient: error on JSON-RPC call' 00:08:29.609 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 19:36:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12471], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:08:29.609 request: 00:08:29.609 { 00:08:29.609 "method": "nvmf_create_subsystem", 00:08:29.609 "params": { 00:08:29.609 "nqn": "nqn.2016-06.io.spdk:cnode12471", 00:08:29.609 "min_cntlid": 0 00:08:29.609 } 00:08:29.609 } 00:08:29.609 Got JSON-RPC error response 00:08:29.609 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:29.609 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29167 -i 65520 00:08:29.867 [2024-07-15 19:36:55.593519] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29167: invalid cntlid range [65520-65519] 00:08:29.867 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 19:36:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29167], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:08:29.867 request: 00:08:29.867 { 00:08:29.867 "method": "nvmf_create_subsystem", 00:08:29.867 "params": { 00:08:29.867 "nqn": "nqn.2016-06.io.spdk:cnode29167", 00:08:29.867 "min_cntlid": 65520 00:08:29.867 } 00:08:29.867 } 00:08:29.867 Got JSON-RPC error response 00:08:29.867 GoRPCClient: error on JSON-RPC call' 00:08:29.867 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 19:36:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29167], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:08:29.867 request: 00:08:29.867 { 00:08:29.867 "method": "nvmf_create_subsystem", 00:08:29.867 "params": { 00:08:29.867 "nqn": "nqn.2016-06.io.spdk:cnode29167", 00:08:29.867 "min_cntlid": 65520 00:08:29.867 } 00:08:29.867 } 00:08:29.867 Got JSON-RPC error response 00:08:29.867 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:29.867 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12926 -I 0 00:08:30.124 [2024-07-15 19:36:55.881823] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12926: invalid cntlid range [1-0] 00:08:30.382 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 19:36:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12926], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:08:30.382 request: 00:08:30.382 { 00:08:30.382 "method": "nvmf_create_subsystem", 00:08:30.382 "params": { 00:08:30.382 "nqn": "nqn.2016-06.io.spdk:cnode12926", 00:08:30.382 "max_cntlid": 0 00:08:30.382 } 00:08:30.382 } 00:08:30.382 Got JSON-RPC error response 00:08:30.382 GoRPCClient: error on 
JSON-RPC call' 00:08:30.382 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 19:36:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12926], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:08:30.382 request: 00:08:30.382 { 00:08:30.382 "method": "nvmf_create_subsystem", 00:08:30.382 "params": { 00:08:30.382 "nqn": "nqn.2016-06.io.spdk:cnode12926", 00:08:30.382 "max_cntlid": 0 00:08:30.382 } 00:08:30.382 } 00:08:30.382 Got JSON-RPC error response 00:08:30.382 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:30.382 19:36:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19547 -I 65520 00:08:30.382 [2024-07-15 19:36:56.126019] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19547: invalid cntlid range [1-65520] 00:08:30.382 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 19:36:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode19547], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:08:30.382 request: 00:08:30.382 { 00:08:30.382 "method": "nvmf_create_subsystem", 00:08:30.382 "params": { 00:08:30.382 "nqn": "nqn.2016-06.io.spdk:cnode19547", 00:08:30.382 "max_cntlid": 65520 00:08:30.382 } 00:08:30.382 } 00:08:30.382 Got JSON-RPC error response 00:08:30.382 GoRPCClient: error on JSON-RPC call' 00:08:30.382 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 19:36:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode19547], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:08:30.382 request: 00:08:30.382 { 00:08:30.382 "method": "nvmf_create_subsystem", 00:08:30.382 "params": { 00:08:30.382 "nqn": "nqn.2016-06.io.spdk:cnode19547", 00:08:30.382 "max_cntlid": 65520 00:08:30.382 } 00:08:30.382 } 00:08:30.382 Got JSON-RPC error response 00:08:30.382 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:30.382 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4992 -i 6 -I 5 00:08:30.946 [2024-07-15 19:36:56.430341] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4992: invalid cntlid range [6-5] 00:08:30.946 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 19:36:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode4992], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:08:30.946 request: 00:08:30.946 { 00:08:30.946 "method": "nvmf_create_subsystem", 00:08:30.946 "params": { 00:08:30.946 "nqn": "nqn.2016-06.io.spdk:cnode4992", 00:08:30.946 "min_cntlid": 6, 00:08:30.946 "max_cntlid": 5 00:08:30.946 } 00:08:30.946 } 00:08:30.946 Got JSON-RPC error response 00:08:30.946 GoRPCClient: error on JSON-RPC call' 00:08:30.946 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 19:36:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode4992], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:08:30.946 request: 00:08:30.946 { 00:08:30.946 "method": "nvmf_create_subsystem", 00:08:30.946 "params": { 00:08:30.946 "nqn": "nqn.2016-06.io.spdk:cnode4992", 00:08:30.946 "min_cntlid": 6, 00:08:30.946 "max_cntlid": 5 00:08:30.946 } 00:08:30.947 } 00:08:30.947 Got JSON-RPC error response 00:08:30.947 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:30.947 { 00:08:30.947 "name": "foobar", 00:08:30.947 "method": "nvmf_delete_target", 00:08:30.947 "req_id": 1 00:08:30.947 } 00:08:30.947 Got JSON-RPC error response 00:08:30.947 response: 00:08:30.947 { 00:08:30.947 "code": -32602, 00:08:30.947 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:30.947 }' 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:30.947 { 00:08:30.947 "name": "foobar", 00:08:30.947 "method": "nvmf_delete_target", 00:08:30.947 "req_id": 1 00:08:30.947 } 00:08:30.947 Got JSON-RPC error response 00:08:30.947 response: 00:08:30.947 { 00:08:30.947 "code": -32602, 00:08:30.947 "message": "The specified target doesn't exist, cannot delete it." 00:08:30.947 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.947 rmmod nvme_tcp 00:08:30.947 rmmod nvme_fabrics 00:08:30.947 rmmod nvme_keyring 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67806 ']' 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67806 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 67806 ']' 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 67806 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67806 00:08:30.947 killing process with pid 67806 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67806' 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 67806 00:08:30.947 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 67806 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:31.203 ************************************ 00:08:31.203 END TEST nvmf_invalid 00:08:31.203 ************************************ 00:08:31.203 00:08:31.203 real 0m6.034s 00:08:31.203 user 0m24.150s 00:08:31.203 sys 0m1.297s 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.203 19:36:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:31.460 19:36:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:31.460 19:36:57 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:31.460 19:36:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.460 19:36:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.460 19:36:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.460 ************************************ 00:08:31.460 START TEST nvmf_abort 00:08:31.460 ************************************ 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:31.460 * Looking for test storage... 
00:08:31.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.460 Cannot find device "nvmf_tgt_br" 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.460 Cannot find device "nvmf_tgt_br2" 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.460 Cannot find device "nvmf_tgt_br" 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.460 Cannot find device "nvmf_tgt_br2" 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.460 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:31.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:08:31.717 00:08:31.717 --- 10.0.0.2 ping statistics --- 00:08:31.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.717 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:08:31.717 00:08:31.717 --- 10.0.0.3 ping statistics --- 00:08:31.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.717 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:31.717 00:08:31.717 --- 10.0.0.1 ping statistics --- 00:08:31.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.717 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68320 00:08:31.717 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68320 00:08:31.718 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68320 ']' 00:08:31.718 19:36:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:31.718 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.718 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.718 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.718 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.718 19:36:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.974 [2024-07-15 19:36:57.526229] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:08:31.974 [2024-07-15 19:36:57.526306] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.974 [2024-07-15 19:36:57.660801] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.231 [2024-07-15 19:36:57.776865] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.231 [2024-07-15 19:36:57.777417] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:32.231 [2024-07-15 19:36:57.777514] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.231 [2024-07-15 19:36:57.777725] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.231 [2024-07-15 19:36:57.777833] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.231 [2024-07-15 19:36:57.778062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.231 [2024-07-15 19:36:57.778263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.231 [2024-07-15 19:36:57.778413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.796 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.796 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.797 [2024-07-15 19:36:58.513394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.797 Malloc0 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.797 Delay0 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.797 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.054 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.054 19:36:58 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.055 19:36:58 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.055 [2024-07-15 19:36:58.592760] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.055 19:36:58 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:33.055 [2024-07-15 19:36:58.773015] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:35.579 Initializing NVMe Controllers 00:08:35.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:35.579 controller IO queue size 128 less than required 00:08:35.579 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:35.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:35.579 Initialization complete. Launching workers. 
00:08:35.579 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30358 00:08:35.579 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30419, failed to submit 62 00:08:35.579 success 30362, unsuccess 57, failed 0 00:08:35.579 19:37:00 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.579 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.579 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:35.579 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.580 rmmod nvme_tcp 00:08:35.580 rmmod nvme_fabrics 00:08:35.580 rmmod nvme_keyring 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68320 ']' 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68320 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68320 ']' 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68320 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68320 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68320' 00:08:35.580 killing process with pid 68320 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68320 00:08:35.580 19:37:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68320 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:35.580 00:08:35.580 real 0m4.234s 00:08:35.580 user 0m12.192s 00:08:35.580 sys 0m0.997s 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.580 19:37:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:35.580 ************************************ 00:08:35.580 END TEST nvmf_abort 00:08:35.580 ************************************ 00:08:35.580 19:37:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:35.580 19:37:01 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:35.580 19:37:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.580 19:37:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.580 19:37:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.580 ************************************ 00:08:35.580 START TEST nvmf_ns_hotplug_stress 00:08:35.580 ************************************ 00:08:35.580 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:35.839 * Looking for test storage... 00:08:35.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.839 19:37:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.839 19:37:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:35.839 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:35.840 Cannot find device "nvmf_tgt_br" 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.840 Cannot find device "nvmf_tgt_br2" 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:35.840 Cannot find device "nvmf_tgt_br" 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:35.840 Cannot find device "nvmf_tgt_br2" 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:35.840 19:37:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:35.840 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:36.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:08:36.099 00:08:36.099 --- 10.0.0.2 ping statistics --- 00:08:36.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.099 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:36.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:36.099 00:08:36.099 --- 10.0.0.3 ping statistics --- 00:08:36.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.099 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:08:36.099 00:08:36.099 --- 10.0.0.1 ping statistics --- 00:08:36.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.099 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.099 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68582 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68582 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68582 ']' 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.100 19:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:36.100 [2024-07-15 19:37:01.800201] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:08:36.100 [2024-07-15 19:37:01.800296] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.359 [2024-07-15 19:37:01.936834] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.359 [2024-07-15 19:37:02.063512] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
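The trace up to this point is nvmf_veth_init building the disposable test network: a dedicated network namespace for the target, three veth pairs, a bridge joining the host-side peer interfaces, iptables rules admitting NVMe/TCP traffic on port 4420, and single-packet pings to prove reachability before any NVMe traffic flows. (The earlier "Cannot find device" and "Cannot open network namespace" messages come from the best-effort teardown of a previous run; each failing cleanup command is followed by a true in the trace, so they are expected.) Condensed into one runnable sequence, using only the interface names and addresses that appear in the trace (the grouping and comments are added here):

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: initiator<->bridge plus two target<->bridge links
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target ends into the namespace and assign addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring the links up on both sides of the namespace boundary
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # open TCP/4420, allow bridge forwarding, then sanity-check with ping
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

The sub-millisecond ping times above are the expected result on this veth/bridge topology, since every hop stays inside the host.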
00:08:36.359 [2024-07-15 19:37:02.063567] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.359 [2024-07-15 19:37:02.063580] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.359 [2024-07-15 19:37:02.063589] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.359 [2024-07-15 19:37:02.063597] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.359 [2024-07-15 19:37:02.063755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.359 [2024-07-15 19:37:02.064705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.359 [2024-07-15 19:37:02.064723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.293 19:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.293 19:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:37.293 19:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.293 19:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.293 19:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.293 19:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.293 19:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:37.293 19:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:37.591 [2024-07-15 19:37:03.162678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.591 19:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:37.890 19:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.148 [2024-07-15 19:37:03.704971] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.148 19:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.406 19:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:38.663 Malloc0 00:08:38.663 19:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:38.921 Delay0 00:08:38.921 19:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.180 19:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:39.438 NULL1 00:08:39.438 
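With networking up, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE, PID 68582 in this run) and ns_hotplug_stress.sh provisions it over JSON-RPC. Collapsing the xtrace above into the corresponding rpc.py calls gives roughly the following; the values are the ones logged, the comments are interpretation, and the shell wrapper is a sketch rather than the script itself:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport, options exactly as logged
  $rpc_py nvmf_create_transport -t tcp -o -u 8192

  # one subsystem, open to any host, with room for up to 10 namespaces
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # two bdevs that will be hot-plugged as namespaces
  $rpc_py bdev_malloc_create 32 512 -b Malloc0                     # small RAM-backed bdev, 512-byte blocks
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 \
          -r 1000000 -t 1000000 -w 1000000 -n 1000000              # Delay0: Malloc0 with artificial latency (values as logged)
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # becomes namespace 1
  $rpc_py bdev_null_create NULL1 1000 512                          # NULL1: null bdev, data not actually stored; resized later
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # becomes namespace 2

Delay0 gives namespace 1 deliberately slow reads while NULL1 completes almost immediately; the per-namespace latency gap shows up clearly in the spdk_nvme_perf summary further down.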
19:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:39.696 19:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68716 00:08:39.696 19:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:39.696 19:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:39.696 19:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.073 Read completed with error (sct=0, sc=11) 00:08:41.073 19:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.331 19:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:41.331 19:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:41.588 true 00:08:41.589 19:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:41.589 19:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.522 19:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.522 19:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:42.522 19:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:42.779 true 00:08:42.779 19:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:42.779 19:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.037 19:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.295 19:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:43.295 19:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:43.553 true 00:08:43.553 19:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:43.553 19:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.119 19:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.119 19:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:44.119 19:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:44.685 true 00:08:44.685 19:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:44.685 19:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.252 19:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.510 19:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:45.510 19:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:45.768 true 00:08:45.768 19:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:45.768 19:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.026 19:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.284 19:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:46.284 19:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:46.542 true 00:08:46.542 19:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:46.542 19:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.106 19:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.106 19:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:47.107 19:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:47.363 true 00:08:47.620 19:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:47.620 19:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.553 19:37:14 
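This repeating block is the first stress phase of ns_hotplug_stress.sh: spdk_nvme_perf generates random-read load from the initiator side while the script, on every pass, detaches namespace 1, re-attaches Delay0, and grows NULL1 by one unit, looping for as long as the perf process (PID 68716 here) stays alive. Reassembled from the @40-@50 script markers in the trace, the shape of that loop is approximately:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # initiator-side load generator, 30 s of queue-depth-128 random reads (flags as logged)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!          # 68716 in this run

  null_size=1000
  while kill -0 "$PERF_PID"; do                     # loop until perf exits
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"   # 1001, 1002, 1003, ... in the trace
  done
  wait "$PERF_PID"

The interleaved "Read completed with error (sct=0, sc=11)" lines are perf reporting failed reads, presumably caught while namespace 1 is detached, which is the condition this phase is meant to provoke; the suppression notices just collapse repeats of the same error.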
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.553 19:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:48.553 19:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:48.811 true 00:08:48.811 19:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:48.811 19:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.071 19:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.329 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:49.329 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:49.587 true 00:08:49.587 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:49.587 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.845 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.103 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:50.103 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:50.361 true 00:08:50.361 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:50.361 19:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.335 19:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.593 19:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:51.593 19:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:51.852 true 00:08:51.852 19:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:51.852 19:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.786 19:37:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.044 19:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:53.044 19:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:53.302 true 00:08:53.302 19:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:53.302 19:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.560 19:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.818 19:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:53.818 19:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:54.076 true 00:08:54.076 19:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:54.076 19:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.334 19:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.592 19:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:54.592 19:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:54.862 true 00:08:54.862 19:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:54.862 19:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.816 19:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.074 19:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:56.074 19:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:56.331 true 00:08:56.331 19:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:56.331 19:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.588 19:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.844 19:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:56.844 19:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1016 00:08:56.844 true 00:08:57.101 19:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:57.101 19:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.101 19:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.358 19:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:57.358 19:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:57.614 true 00:08:57.614 19:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:57.614 19:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.981 19:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.981 19:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:58.981 19:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:59.239 true 00:08:59.239 19:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:08:59.239 19:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.171 19:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.428 19:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:00.428 19:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:00.686 true 00:09:00.686 19:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:00.686 19:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.944 19:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.202 19:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:01.202 19:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:01.460 true 00:09:01.460 19:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:01.460 19:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.776 19:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.776 19:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:01.776 19:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:02.033 true 00:09:02.033 19:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:02.033 19:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.407 19:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.407 19:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:03.407 19:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:03.665 true 00:09:03.665 19:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:03.665 19:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.924 19:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.182 19:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:04.182 19:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:04.441 true 00:09:04.441 19:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:04.441 19:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.374 19:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.374 19:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:05.374 19:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:05.632 true 00:09:05.633 19:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:05.633 19:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.891 19:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.149 19:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:06.149 19:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:06.406 true 00:09:06.406 19:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:06.406 19:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.664 19:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.922 19:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:06.922 19:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:07.181 true 00:09:07.181 19:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:07.181 19:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.115 19:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.392 19:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:08.392 19:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:08.668 true 00:09:08.668 19:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:08.668 19:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.600 19:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.857 19:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:09.857 19:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:09.857 true 00:09:09.857 Initializing NVMe Controllers 00:09:09.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:09.857 Controller IO 
queue size 128, less than required. 00:09:09.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:09.858 Controller IO queue size 128, less than required. 00:09:09.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:09.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:09.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:09.858 Initialization complete. Launching workers. 00:09:09.858 ======================================================== 00:09:09.858 Latency(us) 00:09:09.858 Device Information : IOPS MiB/s Average min max 00:09:09.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 917.94 0.45 70142.79 3691.14 1029348.77 00:09:09.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9388.05 4.58 13635.56 3695.11 559649.04 00:09:09.858 ======================================================== 00:09:09.858 Total : 10305.98 5.03 18668.56 3691.14 1029348.77 00:09:09.858 00:09:09.858 19:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:09.858 19:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.115 19:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.682 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:10.682 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:10.682 true 00:09:10.682 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68716 00:09:10.682 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68716) - No such process 00:09:10.682 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68716 00:09:10.682 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.940 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:11.198 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:11.199 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:11.199 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:11.199 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.199 19:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:11.458 null0 00:09:11.458 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.458 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.458 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:11.775 null1 00:09:11.775 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.775 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.775 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:12.034 null2 00:09:12.034 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.034 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.034 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:12.293 null3 00:09:12.293 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.293 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.293 19:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:12.551 null4 00:09:12.551 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.551 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.551 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:12.809 null5 00:09:12.809 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.809 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.809 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:13.068 null6 00:09:13.068 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.068 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.068 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:13.327 null7 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
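With the perf phase over (the final kill -0 at script line 44 reports "No such process", the script waits on 68716, and both namespaces are removed), the second phase begins: eight small null bdevs, null0 through null7, are created with bdev_null_create nullN 100 4096, and, as the lines that follow show, eight add_remove workers are forked, one namespace ID and one bdev per worker, with their PIDs collected so the script can wait on all of them. A sketch of that fan-out, assembled from the @58-@64 markers (the add_remove helper itself is sketched a little further below):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nthreads=8
  pids=()

  # one null bdev per worker
  for ((i = 0; i < nthreads; i++)); do
      $rpc_py bdev_null_create "null$i" 100 4096
  done

  # fork the workers: namespace IDs 1..8 map onto null0..null7
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"      # the trace shows: wait 69758 69759 69762 69763 69766 69767 69769 69772

Because the eight workers run as background jobs, their xtrace output interleaves freely, which is why the add and remove calls below appear out of order relative to any single worker.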
00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.327 19:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69758 69759 69762 69763 69766 69767 69769 69772 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.327 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.586 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.586 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.586 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:13.586 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.586 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
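Each worker is the small add_remove helper whose xtrace markers (@14 through @18) are interleaved above: given a namespace ID and a bdev name, it attaches and detaches that one namespace ten times in a row, so the subsystem sees up to eight concurrent attach/detach streams. Reconstructed from those markers (a sketch under that reading, not a verbatim copy of the script):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          # explicit -n keeps every worker on its own namespace ID, so the
          # eight streams never race each other for the same NSID
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

Pinning each worker to a fixed NSID is what lets the adds and removes interleave arbitrarily in the log without two workers ever touching the same namespace.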
00:09:13.586 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.845 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.103 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.103 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.103 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.103 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.103 19:37:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.103 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.103 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.103 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.103 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.104 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.362 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.362 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.362 19:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.362 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.621 
19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.621 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:14.879 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.880 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.137 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.394 19:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.394 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.394 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.394 19:37:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.394 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.394 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.394 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.394 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.394 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.650 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:09:15.921 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.921 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.921 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.921 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.922 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.922 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.922 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.922 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.922 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.179 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.438 19:37:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.438 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.696 19:37:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.696 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.954 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.211 19:37:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.469 19:37:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.469 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.726 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.983 
19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.983 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.241 19:37:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.498 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.499 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.499 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.499 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.756 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.014 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.014 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.014 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.015 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.273 rmmod nvme_tcp 00:09:19.273 
rmmod nvme_fabrics 00:09:19.273 rmmod nvme_keyring 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68582 ']' 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68582 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68582 ']' 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68582 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68582 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:19.273 killing process with pid 68582 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68582' 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68582 00:09:19.273 19:37:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68582 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:19.543 00:09:19.543 real 0m43.935s 00:09:19.543 user 3m31.746s 00:09:19.543 sys 0m13.456s 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.543 19:37:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.543 ************************************ 00:09:19.543 END TEST nvmf_ns_hotplug_stress 00:09:19.543 ************************************ 00:09:19.543 19:37:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:19.543 19:37:45 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:19.543 19:37:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.543 19:37:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
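The ns_hotplug_stress trace above reduces to one pattern: a bounded loop (the @16 lines) that races nvmf_subsystem_add_ns (@17) against nvmf_subsystem_remove_ns (@18) on the same subsystem until the counter reaches 10, at which point the trap is cleared and nvmftestfini tears everything down. Below is a minimal editorial sketch of that pattern, reconstructed only from the commands visible in the trace. The rpc.py path, the NQN, the null0..null7 bdev names and the nsid-to-bdev mapping (nsid N attaches bdev null(N-1)) are taken from the log; the strictly sequential ordering is a simplification, since the trace shows adds and removes interleaving concurrently.

# Sketch only -- not the SPDK script itself. Assumes the target is running and
# the subsystem plus the null0..null7 bdevs already exist, as in the log above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; ++i)); do
    # Attach every null bdev as a namespace, in a shuffled order (the @17 calls).
    for n in $(shuf -i 1-8); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # Detach them again in a different shuffled order (the @18 calls).
    for n in $(shuf -i 1-8); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done

In the actual run the removes overlap the adds rather than following them, which is what turns this from a plain setup/teardown into a hotplug stress test.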
00:09:19.543 19:37:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:19.543 ************************************ 00:09:19.543 START TEST nvmf_connect_stress 00:09:19.543 ************************************ 00:09:19.543 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:19.830 * Looking for test storage... 00:09:19.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.830 19:37:45 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:19.830 Cannot find device "nvmf_tgt_br" 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.830 Cannot find device "nvmf_tgt_br2" 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:19.830 Cannot find device "nvmf_tgt_br" 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:19.830 Cannot find device "nvmf_tgt_br2" 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:19.830 19:37:45 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.830 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:20.089 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:20.089 00:09:20.089 --- 10.0.0.2 ping statistics --- 00:09:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.089 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:20.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:09:20.089 00:09:20.089 --- 10.0.0.3 ping statistics --- 00:09:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.089 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:20.089 00:09:20.089 --- 10.0.0.1 ping statistics --- 00:09:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.089 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=71091 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 71091 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71091 ']' 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
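The three pings above confirm the virtual topology that nvmf_veth_init assembled a few lines earlier: an initiator-side veth left on the host at 10.0.0.1, and two target-side veths moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, all joined through the nvmf_br bridge, with TCP port 4420 opened in iptables before nvmf_tgt is started inside the namespace. Below is a condensed sketch of those steps, copied from the ip/iptables commands in the trace; the pre-cleanup deletes and the "Cannot find device" noise they produce on a fresh VM are omitted.

# Sketch of the @166-@207 setup above; names and addresses are from the trace.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on the host, both target interfaces in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings, exactly as in the log: host -> namespace, then namespace -> host.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1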
00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.089 19:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.089 [2024-07-15 19:37:45.825425] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:09:20.089 [2024-07-15 19:37:45.825539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.347 [2024-07-15 19:37:45.965768] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.347 [2024-07-15 19:37:46.112774] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.347 [2024-07-15 19:37:46.112850] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.347 [2024-07-15 19:37:46.112865] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.347 [2024-07-15 19:37:46.112876] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.347 [2024-07-15 19:37:46.112885] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.347 [2024-07-15 19:37:46.113063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.347 [2024-07-15 19:37:46.113215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.347 [2024-07-15 19:37:46.113788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.282 [2024-07-15 19:37:46.845812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.282 [2024-07-15 19:37:46.867709] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.282 NULL1 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71143 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.282 19:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.540 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.540 19:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:21.540 19:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:21.540 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.540 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.105 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.105 19:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:22.105 19:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.105 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.105 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.363 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:22.363 19:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:22.363 19:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.363 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.363 19:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.621 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.621 19:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:22.621 19:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.621 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.621 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.879 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.879 19:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:22.879 19:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.879 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.879 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.137 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.137 19:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:23.137 19:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.137 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.137 19:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.704 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.704 19:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:23.704 19:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.704 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.704 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.963 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.963 19:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:23.963 19:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.963 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.963 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.221 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.221 19:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:24.221 19:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.221 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.221 19:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.480 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.480 19:37:50 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71143 00:09:24.480 19:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.480 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.480 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.739 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.739 19:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:24.739 19:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.739 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.739 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.306 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.306 19:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:25.306 19:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.306 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.306 19:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.564 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.564 19:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:25.564 19:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.564 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.564 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.823 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.823 19:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:25.823 19:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.823 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.823 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.081 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.081 19:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:26.081 19:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.081 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.081 19:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.339 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.339 19:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:26.339 19:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.339 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.339 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.905 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.905 19:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:26.905 19:37:52 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.905 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.905 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.163 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.163 19:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:27.163 19:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.163 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.163 19:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.423 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.423 19:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:27.423 19:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.423 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.423 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.695 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.695 19:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:27.695 19:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.695 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.695 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:27.958 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.958 19:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:27.958 19:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:27.958 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.958 19:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.534 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.534 19:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:28.534 19:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.534 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.534 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:28.791 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.791 19:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:28.791 19:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:28.791 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.791 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.049 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.049 19:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:29.049 19:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:09:29.049 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.049 19:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.306 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.306 19:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:29.306 19:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.306 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.306 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.563 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.563 19:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:29.563 19:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:29.563 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.563 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.129 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.129 19:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:30.129 19:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.129 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.129 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.389 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.389 19:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:30.389 19:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.389 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.389 19:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.647 19:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.647 19:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:30.647 19:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.647 19:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.647 19:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.905 19:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.905 19:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:30.905 19:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:30.905 19:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.905 19:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.163 19:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.163 19:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:31.163 19:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:31.163 19:37:56 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.163 19:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.420 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71143 00:09:31.677 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71143) - No such process 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71143 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.677 rmmod nvme_tcp 00:09:31.677 rmmod nvme_fabrics 00:09:31.677 rmmod nvme_keyring 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 71091 ']' 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 71091 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71091 ']' 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71091 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71091 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:31.677 killing process with pid 71091 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71091' 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71091 00:09:31.677 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71091 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
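The repeated connect_stress.sh@34 'kill -0 71143' / @35 'rpc_cmd' pairs above are a liveness loop: while the connect_stress binary (PID 71143 in this run) keeps opening and dropping connections to the subsystem for its -t 10 second window, the script replays a batch of RPCs against the target. The snippet below is a simplified reconstruction of that pattern, not the actual connect_stress.sh; rpc_cmd is the harness helper seen in the trace, and the PID and rpc.txt path are the values printed above.

  PERF_PID=71143                                    # stress process started earlier in the trace
  rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
  while kill -0 "$PERF_PID" 2>/dev/null; do         # trace line 34: is the stress process still alive?
      rpc_cmd < "$rpcs"                             # trace line 35: replay the queued RPC batch
  done
  wait "$PERF_PID"                                  # reap connect_stress once its 10-second run ends
  rm -f "$rpcs"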
00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:31.934 00:09:31.934 real 0m12.347s 00:09:31.934 user 0m41.034s 00:09:31.934 sys 0m3.257s 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.934 ************************************ 00:09:31.934 19:37:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.934 END TEST nvmf_connect_stress 00:09:31.934 ************************************ 00:09:31.934 19:37:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:31.934 19:37:57 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:31.934 19:37:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:31.934 19:37:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.934 19:37:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.934 ************************************ 00:09:31.934 START TEST nvmf_fused_ordering 00:09:31.934 ************************************ 00:09:31.934 19:37:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:32.192 * Looking for test storage... 
00:09:32.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:32.192 Cannot find device "nvmf_tgt_br" 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.192 Cannot find device "nvmf_tgt_br2" 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:32.192 Cannot find device "nvmf_tgt_br" 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:32.192 Cannot find device "nvmf_tgt_br2" 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:32.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.192 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:09:32.193 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.193 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:09:32.193 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.193 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.193 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.193 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.193 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.193 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.460 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.460 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:32.460 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:32.460 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:32.460 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:32.460 19:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:32.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:09:32.460 00:09:32.460 --- 10.0.0.2 ping statistics --- 00:09:32.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.460 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:32.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:32.460 00:09:32.460 --- 10.0.0.3 ping statistics --- 00:09:32.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.460 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:32.460 00:09:32.460 --- 10.0.0.1 ping statistics --- 00:09:32.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.460 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:09:32.460 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71469 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71469 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71469 ']' 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
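Before any RPCs are issued for this test, nvmfappstart launches the target inside the namespace and blocks until its RPC socket is up (the 'waitforlisten 71469' step above). A rough standalone equivalent, with waitforlisten approximated here by polling the default /var/tmp/spdk.sock socket through rpc.py, would be:

  # Start the target inside the namespace and wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll until the app answers on /var/tmp/spdk.sock; bail out if it died during startup
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done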
00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.461 19:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:32.461 [2024-07-15 19:37:58.189588] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:09:32.461 [2024-07-15 19:37:58.190108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.717 [2024-07-15 19:37:58.326603] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.717 [2024-07-15 19:37:58.433313] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.717 [2024-07-15 19:37:58.433366] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.717 [2024-07-15 19:37:58.433379] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.717 [2024-07-15 19:37:58.433387] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.717 [2024-07-15 19:37:58.433394] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.717 [2024-07-15 19:37:58.433425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:33.651 [2024-07-15 19:37:59.242281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:09:33.651 [2024-07-15 19:37:59.258354] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:33.651 NULL1 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.651 19:37:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:33.651 [2024-07-15 19:37:59.308970] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
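Because network namespaces do not isolate the filesystem, the Unix-domain RPC socket at /var/tmp/spdk.sock is reachable from the host even though nvmf_tgt runs inside nvmf_tgt_ns_spdk, which is why the rpc_cmd calls above need no 'ip netns exec' prefix. Those calls map onto scripts/rpc.py with the same arguments; a standalone sketch of the provisioning done for this test, with the argument values copied from the trace, would be:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                      # same transport flags as the rpc_cmd call above
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                                # allow any host, set serial, cap at 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                                    # listen on the in-namespace address
  $RPC bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512-byte blocks
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then run the fused-ordering stress against the new subsystem:
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'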
00:09:33.651 [2024-07-15 19:37:59.309029] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71519 ] 00:09:34.216 Attached to nqn.2016-06.io.spdk:cnode1 00:09:34.216 Namespace ID: 1 size: 1GB 00:09:34.216 fused_ordering(0) 00:09:34.216 fused_ordering(1) 00:09:34.216 fused_ordering(2) 00:09:34.216 fused_ordering(3) 00:09:34.216 fused_ordering(4) 00:09:34.216 fused_ordering(5) 00:09:34.216 fused_ordering(6) 00:09:34.216 fused_ordering(7) 00:09:34.216 fused_ordering(8) 00:09:34.216 fused_ordering(9) 00:09:34.216 fused_ordering(10) 00:09:34.216 fused_ordering(11) 00:09:34.216 fused_ordering(12) 00:09:34.216 fused_ordering(13) 00:09:34.216 fused_ordering(14) 00:09:34.216 fused_ordering(15) 00:09:34.216 fused_ordering(16) 00:09:34.216 fused_ordering(17) 00:09:34.216 fused_ordering(18) 00:09:34.216 fused_ordering(19) 00:09:34.216 fused_ordering(20) 00:09:34.216 fused_ordering(21) 00:09:34.216 fused_ordering(22) 00:09:34.216 fused_ordering(23) 00:09:34.216 fused_ordering(24) 00:09:34.216 fused_ordering(25) 00:09:34.216 fused_ordering(26) 00:09:34.216 fused_ordering(27) 00:09:34.216 fused_ordering(28) 00:09:34.216 fused_ordering(29) 00:09:34.216 fused_ordering(30) 00:09:34.216 fused_ordering(31) 00:09:34.216 fused_ordering(32) 00:09:34.216 fused_ordering(33) 00:09:34.216 fused_ordering(34) 00:09:34.216 fused_ordering(35) 00:09:34.216 fused_ordering(36) 00:09:34.216 fused_ordering(37) 00:09:34.216 fused_ordering(38) 00:09:34.216 fused_ordering(39) 00:09:34.216 fused_ordering(40) 00:09:34.216 fused_ordering(41) 00:09:34.216 fused_ordering(42) 00:09:34.216 fused_ordering(43) 00:09:34.216 fused_ordering(44) 00:09:34.216 fused_ordering(45) 00:09:34.216 fused_ordering(46) 00:09:34.216 fused_ordering(47) 00:09:34.216 fused_ordering(48) 00:09:34.216 fused_ordering(49) 00:09:34.216 fused_ordering(50) 00:09:34.216 fused_ordering(51) 00:09:34.216 fused_ordering(52) 00:09:34.216 fused_ordering(53) 00:09:34.216 fused_ordering(54) 00:09:34.216 fused_ordering(55) 00:09:34.216 fused_ordering(56) 00:09:34.216 fused_ordering(57) 00:09:34.216 fused_ordering(58) 00:09:34.216 fused_ordering(59) 00:09:34.216 fused_ordering(60) 00:09:34.216 fused_ordering(61) 00:09:34.216 fused_ordering(62) 00:09:34.216 fused_ordering(63) 00:09:34.216 fused_ordering(64) 00:09:34.216 fused_ordering(65) 00:09:34.216 fused_ordering(66) 00:09:34.216 fused_ordering(67) 00:09:34.216 fused_ordering(68) 00:09:34.216 fused_ordering(69) 00:09:34.216 fused_ordering(70) 00:09:34.216 fused_ordering(71) 00:09:34.216 fused_ordering(72) 00:09:34.216 fused_ordering(73) 00:09:34.216 fused_ordering(74) 00:09:34.216 fused_ordering(75) 00:09:34.216 fused_ordering(76) 00:09:34.216 fused_ordering(77) 00:09:34.216 fused_ordering(78) 00:09:34.216 fused_ordering(79) 00:09:34.216 fused_ordering(80) 00:09:34.216 fused_ordering(81) 00:09:34.216 fused_ordering(82) 00:09:34.216 fused_ordering(83) 00:09:34.216 fused_ordering(84) 00:09:34.216 fused_ordering(85) 00:09:34.216 fused_ordering(86) 00:09:34.216 fused_ordering(87) 00:09:34.216 fused_ordering(88) 00:09:34.216 fused_ordering(89) 00:09:34.216 fused_ordering(90) 00:09:34.216 fused_ordering(91) 00:09:34.216 fused_ordering(92) 00:09:34.216 fused_ordering(93) 00:09:34.216 fused_ordering(94) 00:09:34.216 fused_ordering(95) 00:09:34.216 fused_ordering(96) 00:09:34.216 fused_ordering(97) 00:09:34.216 
fused_ordering(98) 00:09:34.216 [fused_ordering(99) through fused_ordering(956) elided: 858 further entries identical in form, differing only in the counter value; timestamps advance from 00:09:34.216 to 00:09:35.860] fused_ordering(957) 00:09:35.860
fused_ordering(958) 00:09:35.860 fused_ordering(959) 00:09:35.860 fused_ordering(960) 00:09:35.860 fused_ordering(961) 00:09:35.860 fused_ordering(962) 00:09:35.860 fused_ordering(963) 00:09:35.860 fused_ordering(964) 00:09:35.860 fused_ordering(965) 00:09:35.860 fused_ordering(966) 00:09:35.860 fused_ordering(967) 00:09:35.860 fused_ordering(968) 00:09:35.860 fused_ordering(969) 00:09:35.860 fused_ordering(970) 00:09:35.860 fused_ordering(971) 00:09:35.860 fused_ordering(972) 00:09:35.860 fused_ordering(973) 00:09:35.860 fused_ordering(974) 00:09:35.860 fused_ordering(975) 00:09:35.860 fused_ordering(976) 00:09:35.860 fused_ordering(977) 00:09:35.860 fused_ordering(978) 00:09:35.860 fused_ordering(979) 00:09:35.860 fused_ordering(980) 00:09:35.860 fused_ordering(981) 00:09:35.860 fused_ordering(982) 00:09:35.860 fused_ordering(983) 00:09:35.860 fused_ordering(984) 00:09:35.860 fused_ordering(985) 00:09:35.860 fused_ordering(986) 00:09:35.860 fused_ordering(987) 00:09:35.860 fused_ordering(988) 00:09:35.860 fused_ordering(989) 00:09:35.860 fused_ordering(990) 00:09:35.860 fused_ordering(991) 00:09:35.860 fused_ordering(992) 00:09:35.860 fused_ordering(993) 00:09:35.860 fused_ordering(994) 00:09:35.860 fused_ordering(995) 00:09:35.860 fused_ordering(996) 00:09:35.860 fused_ordering(997) 00:09:35.860 fused_ordering(998) 00:09:35.860 fused_ordering(999) 00:09:35.860 fused_ordering(1000) 00:09:35.860 fused_ordering(1001) 00:09:35.860 fused_ordering(1002) 00:09:35.860 fused_ordering(1003) 00:09:35.860 fused_ordering(1004) 00:09:35.860 fused_ordering(1005) 00:09:35.860 fused_ordering(1006) 00:09:35.860 fused_ordering(1007) 00:09:35.860 fused_ordering(1008) 00:09:35.860 fused_ordering(1009) 00:09:35.860 fused_ordering(1010) 00:09:35.860 fused_ordering(1011) 00:09:35.860 fused_ordering(1012) 00:09:35.860 fused_ordering(1013) 00:09:35.860 fused_ordering(1014) 00:09:35.860 fused_ordering(1015) 00:09:35.860 fused_ordering(1016) 00:09:35.860 fused_ordering(1017) 00:09:35.860 fused_ordering(1018) 00:09:35.860 fused_ordering(1019) 00:09:35.860 fused_ordering(1020) 00:09:35.860 fused_ordering(1021) 00:09:35.860 fused_ordering(1022) 00:09:35.860 fused_ordering(1023) 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.860 rmmod nvme_tcp 00:09:35.860 rmmod nvme_fabrics 00:09:35.860 rmmod nvme_keyring 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71469 ']' 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71469 00:09:35.860 19:38:01 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71469 ']' 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71469 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71469 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:35.860 killing process with pid 71469 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71469' 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71469 00:09:35.860 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71469 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:36.116 00:09:36.116 real 0m4.114s 00:09:36.116 user 0m4.920s 00:09:36.116 sys 0m1.413s 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.116 19:38:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:36.116 ************************************ 00:09:36.116 END TEST nvmf_fused_ordering 00:09:36.116 ************************************ 00:09:36.116 19:38:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:36.116 19:38:01 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:36.116 19:38:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.116 19:38:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.116 19:38:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.116 ************************************ 00:09:36.116 START TEST nvmf_delete_subsystem 00:09:36.116 ************************************ 00:09:36.116 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:36.374 * Looking for test storage... 
00:09:36.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:36.374 Cannot find device "nvmf_tgt_br" 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.374 Cannot find device "nvmf_tgt_br2" 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:36.374 Cannot find device "nvmf_tgt_br" 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:09:36.374 19:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:36.374 Cannot find device "nvmf_tgt_br2" 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.374 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.374 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.375 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:36.375 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:36.633 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.634 19:38:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:36.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:09:36.634 00:09:36.634 --- 10.0.0.2 ping statistics --- 00:09:36.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.634 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:36.634 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.634 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:09:36.634 00:09:36.634 --- 10.0.0.3 ping statistics --- 00:09:36.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.634 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:36.634 00:09:36.634 --- 10.0.0.1 ping statistics --- 00:09:36.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.634 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71730 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71730 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71730 ']' 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
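For orientation, the nvmf/common.sh trace above (nvmf_veth_init) boils down to a small veth-plus-namespace topology: the target's interfaces live inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator side stays in the root namespace on 10.0.0.1, and a bridge ties the root-side veth peers together. The shell sketch below is a condensed, hedged reconstruction from the commands echoed in this log rather than the script itself; the interface, bridge and namespace names, the addresses, and the port 4420 rule all come from the trace, while the cleanup and error handling of the real helper are omitted.

#!/usr/bin/env bash
# Hedged sketch of the topology built by nvmf_veth_init, per the trace above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"                                          # target lives in its own netns
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair #1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target-side pair #2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target listener
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target listener

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge                             # bridge the root-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                          # sanity checks mirrored from the log
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

The three pings at the end are exactly the checks whose replies appear above; if any of them failed, the NVMe/TCP connections to 10.0.0.2:4420 later in the test would fail as well.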
00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.634 19:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.634 [2024-07-15 19:38:02.347213] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:09:36.634 [2024-07-15 19:38:02.347304] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.890 [2024-07-15 19:38:02.489959] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.890 [2024-07-15 19:38:02.598413] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.890 [2024-07-15 19:38:02.598486] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.890 [2024-07-15 19:38:02.598500] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.890 [2024-07-15 19:38:02.598510] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.890 [2024-07-15 19:38:02.598520] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.890 [2024-07-15 19:38:02.598686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.890 [2024-07-15 19:38:02.598699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.823 [2024-07-15 19:38:03.363628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.823 [2024-07-15 19:38:03.379729] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.823 NULL1 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.823 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.824 Delay0 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71781 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:37.824 19:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:37.824 [2024-07-15 19:38:03.574306] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
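Before the deletion is issued just below, the xtrace above has provisioned the target through a short RPC sequence: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a 1000 MB null bdev wrapped in a delay bdev (roughly one second of injected latency per I/O, read from the -r/-t/-w/-n values), and that delay bdev exposed as a namespace, after which spdk_nvme_perf keeps 128 I/Os outstanding against it. Written out as plain rpc.py calls the sequence looks roughly like the sketch below; this is a hedged paraphrase, since the test actually goes through its rpc_cmd wrapper over the default /var/tmp/spdk.sock, so the standalone rpc.py form and the RPC variable are illustrative, while the flag values are copied verbatim from the trace.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # illustrative; the test calls its rpc_cmd helper instead

$RPC nvmf_create_transport -t tcp -o -u 8192                  # transport flags copied as-is from the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                          # NULL1: 1000 MB null bdev, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # expose Delay0 through cnode1

# Keep a deep queue of I/O in flight while the subsystem is deleted (backgrounded, as in the test).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The delay bdev is what guarantees that commands are still outstanding when nvmf_delete_subsystem runs about two seconds later, which is why the completions below come back as errors: the queued I/O is aborted as the subsystem is torn down rather than completing normally.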
00:09:39.763 19:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.763 19:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.763 19:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:40.021 - 00:09:40.022 [repeated completion records elided: interleaved 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries from the outstanding spdk_nvme_perf I/O]
00:09:40.022 [2024-07-15 19:38:05.607803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4880 is same with the state(5) to be set
00:09:40.022 [further repeated Read/Write completion records elided]
00:09:40.022 [2024-07-15 19:38:05.609651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c1ab0 is same with the state(5) to be set
00:09:40.022 [further repeated Read/Write completion records elided]
00:09:40.022 [2024-07-15 19:38:05.611545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6938000c00 is same with the state(5) to be set
00:09:40.022 [further repeated Read/Write completion records elided] 00:09:40.022 Write completed with error (sct=0,
sc=8) 00:09:40.022 Read completed with error (sct=0, sc=8) 00:09:40.022 Read completed with error (sct=0, sc=8) 00:09:40.022 Read completed with error (sct=0, sc=8) 00:09:41.017 [2024-07-15 19:38:06.587830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2510 is same with the state(5) to be set 00:09:41.017 Read completed with error (sct=0, sc=8) 00:09:41.017 Read completed with error (sct=0, sc=8) 00:09:41.017 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 [2024-07-15 19:38:06.608742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4530 is same with the state(5) to be set 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 [2024-07-15 19:38:06.609211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c4bd0 is same with the state(5) to be set 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 [2024-07-15 19:38:06.611085] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f693800d800 is same with the state(5) to be set 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Read completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 Write completed with error (sct=0, sc=8) 00:09:41.018 [2024-07-15 19:38:06.611621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f693800d020 is same with the state(5) to be set 00:09:41.018 Initializing NVMe Controllers 00:09:41.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:41.018 Controller IO queue size 128, less than required. 00:09:41.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:41.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:41.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:41.018 Initialization complete. Launching workers. 00:09:41.018 ======================================================== 00:09:41.018 Latency(us) 00:09:41.018 Device Information : IOPS MiB/s Average min max 00:09:41.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.54 0.08 910657.55 1436.01 1010427.56 00:09:41.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 145.64 0.07 958910.83 789.18 1013827.31 00:09:41.018 ======================================================== 00:09:41.018 Total : 309.18 0.15 933387.79 789.18 1013827.31 00:09:41.018 00:09:41.018 [2024-07-15 19:38:06.613308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2510 (9): Bad file descriptor 00:09:41.018 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:41.018 19:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.018 19:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:41.018 19:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71781 00:09:41.018 19:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71781 00:09:41.584 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71781) - No such process 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71781 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71781 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71781 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.584 [2024-07-15 19:38:07.138743] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71827 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71827 00:09:41.584 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:41.584 [2024-07-15 19:38:07.308096] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
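The trace above is the heart of the delete-subsystem check: cnode1 is removed while spdk_nvme_perf is still driving I/O, the queued requests complete with errors and the first perf process (71781) dies on its own, after which the script re-creates the subsystem, listener and Delay0 namespace and launches a second perf run that it polls with kill -0. A minimal stand-alone sketch of that flow, not the test script itself, assuming rpc.py (scripts/rpc.py talking to the target's default /var/tmp/spdk.sock), spdk_nvme_perf on PATH, an existing Delay0 bdev, and a target reachable at 10.0.0.2:4420:

    # Re-create what the earlier nvmf_delete_subsystem call removed
    # (same RPCs as in the entries above).
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Start a 3-second 512-byte random read/write workload at queue depth 128
    # in the background (same flags as the run above; -c 0xC matches the
    # "Associating ... with lcore 2/3" lines).
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # Poll it the way delete_subsystem.sh does: kill -0 only checks that the
    # PID still exists, it sends no signal.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf did not finish in time"; break; }
        sleep 0.5
    done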
00:09:42.150 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:42.150 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71827 00:09:42.150 19:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:42.408 19:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:42.408 19:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71827 00:09:42.408 19:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:42.974 19:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:42.974 19:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71827 00:09:42.974 19:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:43.540 19:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:43.540 19:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71827 00:09:43.540 19:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.107 19:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.107 19:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71827 00:09:44.107 19:38:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.675 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.675 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71827 00:09:44.675 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.675 Initializing NVMe Controllers 00:09:44.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:44.675 Controller IO queue size 128, less than required. 00:09:44.675 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:44.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:44.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:44.675 Initialization complete. Launching workers. 
00:09:44.675 ======================================================== 00:09:44.675 Latency(us) 00:09:44.675 Device Information : IOPS MiB/s Average min max 00:09:44.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003401.83 1000149.38 1010201.30 00:09:44.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005544.82 1000586.52 1041838.23 00:09:44.675 ======================================================== 00:09:44.675 Total : 256.00 0.12 1004473.33 1000149.38 1041838.23 00:09:44.675 00:09:44.934 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.934 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71827 00:09:44.934 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71827) - No such process 00:09:44.934 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71827 00:09:44.934 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:44.934 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:44.934 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.934 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.192 rmmod nvme_tcp 00:09:45.192 rmmod nvme_fabrics 00:09:45.192 rmmod nvme_keyring 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71730 ']' 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71730 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71730 ']' 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71730 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71730 00:09:45.192 killing process with pid 71730 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71730' 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71730 00:09:45.192 19:38:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71730 00:09:45.452 19:38:11 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:45.452 ************************************ 00:09:45.452 END TEST nvmf_delete_subsystem 00:09:45.452 ************************************ 00:09:45.452 00:09:45.452 real 0m9.215s 00:09:45.452 user 0m28.615s 00:09:45.452 sys 0m1.516s 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.452 19:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:45.452 19:38:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:45.452 19:38:11 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:45.452 19:38:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:45.452 19:38:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.452 19:38:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.452 ************************************ 00:09:45.452 START TEST nvmf_ns_masking 00:09:45.452 ************************************ 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:45.452 * Looking for test storage... 
00:09:45.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.452 19:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9479b0b9-ab0c-4a7d-854c-099ce0265b04 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=645120a8-b4de-4ead-82f5-e9bc62780cb2 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:45.453 
19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2088ebbb-7c9e-49cb-ba5a-b588557a395a 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.453 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.727 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:45.728 Cannot find device "nvmf_tgt_br" 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:09:45.728 19:38:11 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.728 Cannot find device "nvmf_tgt_br2" 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:45.728 Cannot find device "nvmf_tgt_br" 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:45.728 Cannot find device "nvmf_tgt_br2" 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.728 19:38:11 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.728 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:45.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:45.991 00:09:45.991 --- 10.0.0.2 ping statistics --- 00:09:45.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.991 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:45.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:09:45.991 00:09:45.991 --- 10.0.0.3 ping statistics --- 00:09:45.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.991 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:45.991 00:09:45.991 --- 10.0.0.1 ping statistics --- 00:09:45.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.991 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:45.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
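The ping checks a few entries above confirm the virtual topology that nvmf_veth_init just built: the initiator stays in the root namespace on 10.0.0.1, the target's two interfaces sit in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and the host ends of all three veth pairs are bridged on nvmf_br. A condensed re-creation of those steps (same interface and namespace names as the trace, run as root; the real helper also tears down leftovers first, which is where the "Cannot find device" and "Cannot open network namespace" messages come from):

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per interface; the *_br ends stay on the host bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign addresses.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host ends together and let NVMe/TCP traffic through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, mirroring the ping statistics above.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1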
00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72071 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72071 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72071 ']' 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.991 19:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:45.991 [2024-07-15 19:38:11.620644] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:09:45.991 [2024-07-15 19:38:11.620749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.991 [2024-07-15 19:38:11.759518] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.250 [2024-07-15 19:38:11.872757] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.250 [2024-07-15 19:38:11.872809] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.250 [2024-07-15 19:38:11.872820] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.250 [2024-07-15 19:38:11.872829] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.250 [2024-07-15 19:38:11.872836] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
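The nvmfappstart/waitforlisten entries above boil down to launching nvmf_tgt inside the test namespace and blocking until its JSON-RPC socket exists. A rough, simplified equivalent (the real waitforlisten helper is more thorough), assuming the namespace from the previous block and the default /var/tmp/spdk.sock path:

    # Start the target inside the test namespace: shared-memory id 0 and all
    # tracepoint groups enabled, as in the command line shown above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Wait until the RPC listener socket appears before issuing rpc.py calls.
    while [[ ! -S /var/tmp/spdk.sock ]]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.1
    done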
00:09:46.250 [2024-07-15 19:38:11.872860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.187 19:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.187 19:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:47.187 19:38:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.187 19:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.187 19:38:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:47.187 19:38:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.187 19:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:47.447 [2024-07-15 19:38:12.990848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.447 19:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:47.447 19:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:47.447 19:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:47.705 Malloc1 00:09:47.705 19:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:47.962 Malloc2 00:09:47.962 19:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.219 19:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:48.477 19:38:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.735 [2024-07-15 19:38:14.329323] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.735 19:38:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:48.735 19:38:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2088ebbb-7c9e-49cb-ba5a-b588557a395a -a 10.0.0.2 -s 4420 -i 4 00:09:48.735 19:38:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.735 19:38:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:48.735 19:38:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.735 19:38:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:48.735 19:38:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:51.263 [ 0]:0x1 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0041b7decd7f47fcb2f19fdce283f7c5 00:09:51.263 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0041b7decd7f47fcb2f19fdce283f7c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:51.264 [ 0]:0x1 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0041b7decd7f47fcb2f19fdce283f7c5 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0041b7decd7f47fcb2f19fdce283f7c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:51.264 [ 1]:0x2 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0ab292d16a24ad5bc193da949f0f256 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0ab292d16a24ad5bc193da949f0f256 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:51.264 19:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:09:51.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.264 19:38:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.830 19:38:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:51.830 19:38:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:51.830 19:38:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2088ebbb-7c9e-49cb-ba5a-b588557a395a -a 10.0.0.2 -s 4420 -i 4 00:09:52.088 19:38:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:52.088 19:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:52.088 19:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.088 19:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:52.088 19:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:52.088 19:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:53.988 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:53.989 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:53.989 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.989 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:53.989 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.989 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:53.989 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:53.989 19:38:19 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:54.247 [ 0]:0x2 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0ab292d16a24ad5bc193da949f0f256 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0ab292d16a24ad5bc193da949f0f256 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.247 19:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:54.506 [ 0]:0x1 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0041b7decd7f47fcb2f19fdce283f7c5 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0041b7decd7f47fcb2f19fdce283f7c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:54.506 [ 1]:0x2 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0ab292d16a24ad5bc193da949f0f256 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0ab292d16a24ad5bc193da949f0f256 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.506 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:54.764 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:54.764 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:54.764 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:54.764 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:54.765 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.765 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:54.765 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.765 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:54.765 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:54.765 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:54.765 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:54.765 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:55.023 [ 0]:0x2 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0ab292d16a24ad5bc193da949f0f256 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0ab292d16a24ad5bc193da949f0f256 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.023 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:55.282 19:38:20 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:55.282 19:38:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2088ebbb-7c9e-49cb-ba5a-b588557a395a -a 10.0.0.2 -s 4420 -i 4 00:09:55.540 19:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:55.540 19:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:55.540 19:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.540 19:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:55.540 19:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:55.540 19:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:57.439 [ 0]:0x1 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:57.439 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0041b7decd7f47fcb2f19fdce283f7c5 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0041b7decd7f47fcb2f19fdce283f7c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:57.696 [ 1]:0x2 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0ab292d16a24ad5bc193da949f0f256 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ d0ab292d16a24ad5bc193da949f0f256 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.696 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:57.953 [ 0]:0x2 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0ab292d16a24ad5bc193da949f0f256 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0ab292d16a24ad5bc193da949f0f256 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:57.953 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:58.211 [2024-07-15 19:38:23.976047] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:58.211 2024/07/15 19:38:23 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:09:58.211 request: 00:09:58.211 { 00:09:58.211 "method": "nvmf_ns_remove_host", 00:09:58.211 "params": { 00:09:58.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.211 "nsid": 2, 00:09:58.211 "host": "nqn.2016-06.io.spdk:host1" 00:09:58.211 } 00:09:58.211 } 00:09:58.211 Got JSON-RPC error response 00:09:58.211 GoRPCClient: error on JSON-RPC call 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:58.468 19:38:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:58.468 19:38:24 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:58.468 [ 0]:0x2 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0ab292d16a24ad5bc193da949f0f256 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0ab292d16a24ad5bc193da949f0f256 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72448 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72448 /var/tmp/host.sock 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72448 ']' 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.468 19:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:58.468 [2024-07-15 19:38:24.217887] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:09:58.468 [2024-07-15 19:38:24.218016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72448 ] 00:09:58.725 [2024-07-15 19:38:24.363576] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.725 [2024-07-15 19:38:24.474640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.658 19:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.658 19:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:59.658 19:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.934 19:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:59.934 19:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9479b0b9-ab0c-4a7d-854c-099ce0265b04 00:09:59.934 19:38:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:59.934 19:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9479B0B9AB0C4A7D854C099CE0265B04 -i 00:10:00.216 19:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 645120a8-b4de-4ead-82f5-e9bc62780cb2 00:10:00.216 19:38:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:00.216 19:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 645120A8B4DE4EAD82F5E9BC62780CB2 -i 00:10:00.474 19:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:00.733 19:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:00.991 19:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:00.991 19:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:01.250 nvme0n1 00:10:01.508 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:01.508 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:01.767 nvme1n2 00:10:01.767 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:01.767 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r 
'.[].name' 00:10:01.767 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:01.767 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:01.767 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:02.025 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:02.025 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:02.025 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:02.025 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:02.284 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9479b0b9-ab0c-4a7d-854c-099ce0265b04 == \9\4\7\9\b\0\b\9\-\a\b\0\c\-\4\a\7\d\-\8\5\4\c\-\0\9\9\c\e\0\2\6\5\b\0\4 ]] 00:10:02.284 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:02.284 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:02.284 19:38:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 645120a8-b4de-4ead-82f5-e9bc62780cb2 == \6\4\5\1\2\0\a\8\-\b\4\d\e\-\4\e\a\d\-\8\2\f\5\-\e\9\b\c\6\2\7\8\0\c\b\2 ]] 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72448 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72448 ']' 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72448 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72448 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:02.542 killing process with pid 72448 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72448' 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72448 00:10:02.542 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72448 00:10:02.800 19:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.059 19:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:03.059 19:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:03.059 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.059 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:03.318 
19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.318 rmmod nvme_tcp 00:10:03.318 rmmod nvme_fabrics 00:10:03.318 rmmod nvme_keyring 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72071 ']' 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72071 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72071 ']' 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72071 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72071 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72071' 00:10:03.318 killing process with pid 72071 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72071 00:10:03.318 19:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72071 00:10:03.576 19:38:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.576 19:38:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.576 19:38:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.576 19:38:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.576 19:38:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.576 19:38:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.576 19:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.576 19:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.577 19:38:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:03.577 00:10:03.577 real 0m18.147s 00:10:03.577 user 0m28.644s 00:10:03.577 sys 0m2.895s 00:10:03.577 19:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.577 ************************************ 00:10:03.577 END TEST nvmf_ns_masking 00:10:03.577 ************************************ 00:10:03.577 19:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:03.577 19:38:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:03.577 19:38:29 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:10:03.577 19:38:29 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:10:03.577 19:38:29 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:03.577 
19:38:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:03.577 19:38:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.577 19:38:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:03.577 ************************************ 00:10:03.577 START TEST nvmf_host_management 00:10:03.577 ************************************ 00:10:03.577 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:03.836 * Looking for test storage... 00:10:03.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.836 19:38:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.836 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:03.836 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.836 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.836 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.836 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.836 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.836 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.837 
19:38:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:03.837 Cannot find device "nvmf_tgt_br" 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.837 Cannot find device "nvmf_tgt_br2" 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:03.837 Cannot find device "nvmf_tgt_br" 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:03.837 Cannot find device "nvmf_tgt_br2" 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.837 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:04.097 19:38:29 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:04.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:10:04.097 00:10:04.097 --- 10.0.0.2 ping statistics --- 00:10:04.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.097 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:04.097 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.097 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:10:04.097 00:10:04.097 --- 10.0.0.3 ping statistics --- 00:10:04.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.097 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:04.097 00:10:04.097 --- 10.0.0.1 ping statistics --- 00:10:04.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.097 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72812 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72812 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72812 ']' 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.097 19:38:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.097 [2024-07-15 19:38:29.825776] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:10:04.097 [2024-07-15 19:38:29.825885] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.356 [2024-07-15 19:38:29.967660] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.356 [2024-07-15 19:38:30.102984] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.356 [2024-07-15 19:38:30.103335] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.356 [2024-07-15 19:38:30.103508] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.356 [2024-07-15 19:38:30.103652] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.356 [2024-07-15 19:38:30.103704] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
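Before the nvmf_tgt above was started, the nvmf_veth_init sequence in this run built the network it listens from: a network namespace (nvmf_tgt_ns_spdk) holding the target side of the veth pairs, an nvmf_br bridge joining the host-side peers, and iptables rules admitting NVMe/TCP traffic on port 4420. A condensed sketch of that topology follows, using only the interface names, addresses, and commands visible in the log output above (error handling and the "Cannot find device" cleanup attempts are omitted):

    # create the target namespace and the veth pairs used by the test
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target-side interfaces into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic to the listener port and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity check: the initiator-side address can reach the in-namespace target
    ping -c 1 10.0.0.2

With this in place, the nvmf_tgt started above inside nvmf_tgt_ns_spdk can expose 10.0.0.2:4420, which is the listener address the subsequent bdevperf host connects to.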
00:10:04.356 [2024-07-15 19:38:30.103963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.356 [2024-07-15 19:38:30.104177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.356 [2024-07-15 19:38:30.104186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.356 [2024-07-15 19:38:30.104047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 [2024-07-15 19:38:30.954621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.368 19:38:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 Malloc0 00:10:05.368 [2024-07-15 19:38:31.031207] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72884 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72884 /var/tmp/bdevperf.sock 00:10:05.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72884 ']' 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:05.368 { 00:10:05.368 "params": { 00:10:05.368 "name": "Nvme$subsystem", 00:10:05.368 "trtype": "$TEST_TRANSPORT", 00:10:05.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.368 "adrfam": "ipv4", 00:10:05.368 "trsvcid": "$NVMF_PORT", 00:10:05.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.368 "hdgst": ${hdgst:-false}, 00:10:05.368 "ddgst": ${ddgst:-false} 00:10:05.368 }, 00:10:05.368 "method": "bdev_nvme_attach_controller" 00:10:05.368 } 00:10:05.368 EOF 00:10:05.368 )") 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:05.368 19:38:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:05.368 "params": { 00:10:05.368 "name": "Nvme0", 00:10:05.368 "trtype": "tcp", 00:10:05.368 "traddr": "10.0.0.2", 00:10:05.368 "adrfam": "ipv4", 00:10:05.368 "trsvcid": "4420", 00:10:05.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:05.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:05.368 "hdgst": false, 00:10:05.368 "ddgst": false 00:10:05.368 }, 00:10:05.368 "method": "bdev_nvme_attach_controller" 00:10:05.368 }' 00:10:05.368 [2024-07-15 19:38:31.142353] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:10:05.368 [2024-07-15 19:38:31.142470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72884 ] 00:10:05.626 [2024-07-15 19:38:31.281375] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.884 [2024-07-15 19:38:31.409236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.884 Running I/O for 10 seconds... 
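The bdevperf run that starts here is driven by the JSON config assembled above: gen_nvmf_target_json expands the heredoc template into a single bdev_nvme_attach_controller entry pointing at the 10.0.0.2:4420 listener created a few lines earlier. A minimal host-side sketch of that invocation is below; the params block and command-line flags are taken from the log, while the outer "subsystems"/"bdev" wrapper and the temporary file name are assumptions for illustration (the test itself pipes the generated JSON through /dev/fd/63 rather than writing a file):

    # assumed file name; the wrapper layout is the generic SPDK JSON-config shape,
    # the params block is the one printed by gen_nvmf_target_json in the log above
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # 64-deep queue, 65536-byte I/O, verify workload for 10 seconds, as in the log
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10

The test then polls bdev_get_iostat over /var/tmp/bdevperf.sock to confirm that reads are actually flowing before exercising the management RPCs.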
00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.451 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.451 [2024-07-15 19:38:32.157553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.451 [2024-07-15 19:38:32.157626] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.451 [2024-07-15 19:38:32.157639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be 
set 00:10:06.451 [2024-07-15 19:38:32.157648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.451 [2024-07-15 19:38:32.157656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.451 [2024-07-15 19:38:32.157665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.451 [2024-07-15 19:38:32.157674] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.451 [2024-07-15 19:38:32.157683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.451 [2024-07-15 19:38:32.157692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157867] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157899] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157907] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157924] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157960] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157974] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.157995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158027] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158093] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158106] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158143] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158195] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158234] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158258] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158285] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d4d0 is same with the state(5) to be set 00:10:06.452 [2024-07-15 19:38:32.158395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158479] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.452 [2024-07-15 19:38:32.158770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.452 [2024-07-15 19:38:32.158781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.158984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.158993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:06.453 [2024-07-15 19:38:32.159559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.453 [2024-07-15 19:38:32.159684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.453 [2024-07-15 19:38:32.159693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.454 [2024-07-15 19:38:32.159704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.454 [2024-07-15 19:38:32.159713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.454 [2024-07-15 19:38:32.159724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.454 [2024-07-15 19:38:32.159733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.454 [2024-07-15 19:38:32.159749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.454 [2024-07-15 19:38:32.159758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.454 
[2024-07-15 19:38:32.159770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:06.454 [2024-07-15 19:38:32.159779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:06.454 [2024-07-15 19:38:32.159789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9369c0 is same with the state(5) to be set 00:10:06.454 [2024-07-15 19:38:32.159857] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9369c0 was disconnected and freed. reset controller. 00:10:06.454 [2024-07-15 19:38:32.160979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:06.454 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.454 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:06.454 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.454 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:06.454 task offset: 98304 on job bdev=Nvme0n1 fails 00:10:06.454 00:10:06.454 Latency(us) 00:10:06.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.454 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:06.454 Job: Nvme0n1 ended in about 0.57 seconds with error 00:10:06.454 Verification LBA range: start 0x0 length 0x400 00:10:06.454 Nvme0n1 : 0.57 1359.01 84.94 113.25 0.00 42256.38 5093.93 37891.72 00:10:06.454 =================================================================================================================== 00:10:06.454 Total : 1359.01 84.94 113.25 0.00 42256.38 5093.93 37891.72 00:10:06.454 [2024-07-15 19:38:32.163209] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:06.454 [2024-07-15 19:38:32.163241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x936c90 (9): Bad file descriptor 00:10:06.454 19:38:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.454 19:38:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:06.454 [2024-07-15 19:38:32.173653] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
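For reference, the waitforio gate traced at the top of this run (target/host_management.sh@52-64 in the xtrace) reduces to polling bdevperf's RPC socket for bdev iostat and parsing num_read_ops with jq until a threshold is crossed. A minimal standalone sketch of that pattern follows, reusing the socket path, bdev name, 10-attempt budget and 100-read threshold from the trace; the rpc.py location and the sleep between polls are assumptions for illustration, not taken from the log.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK RPC client; path assumed from this repo layout
sock=/var/tmp/bdevperf.sock                       # bdevperf RPC socket, as in the trace above
bdev=Nvme0n1

for i in {1..10}; do                              # same 10-attempt budget as the traced loop
    reads=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then                 # the run above saw read_io_count=707, well past the 100-op bar
        echo "I/O is flowing on $bdev ($reads reads completed)"
        break
    fi
    sleep 0.25                                    # poll interval is an assumption, not shown in the trace
done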
00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72884 00:10:07.821 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72884) - No such process 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:07.821 { 00:10:07.821 "params": { 00:10:07.821 "name": "Nvme$subsystem", 00:10:07.821 "trtype": "$TEST_TRANSPORT", 00:10:07.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.821 "adrfam": "ipv4", 00:10:07.821 "trsvcid": "$NVMF_PORT", 00:10:07.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.821 "hdgst": ${hdgst:-false}, 00:10:07.821 "ddgst": ${ddgst:-false} 00:10:07.821 }, 00:10:07.821 "method": "bdev_nvme_attach_controller" 00:10:07.821 } 00:10:07.821 EOF 00:10:07.821 )") 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:07.821 19:38:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:07.821 "params": { 00:10:07.821 "name": "Nvme0", 00:10:07.821 "trtype": "tcp", 00:10:07.821 "traddr": "10.0.0.2", 00:10:07.821 "adrfam": "ipv4", 00:10:07.821 "trsvcid": "4420", 00:10:07.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:07.821 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:07.821 "hdgst": false, 00:10:07.821 "ddgst": false 00:10:07.821 }, 00:10:07.821 "method": "bdev_nvme_attach_controller" 00:10:07.821 }' 00:10:07.821 [2024-07-15 19:38:33.245543] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:10:07.821 [2024-07-15 19:38:33.245651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72934 ] 00:10:07.821 [2024-07-15 19:38:33.382335] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.822 [2024-07-15 19:38:33.495252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.078 Running I/O for 1 seconds... 
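The bdevperf pass above takes its bdev configuration over /dev/fd/62; the JSON rendered by gen_nvmf_target_json is the bdev_nvme_attach_controller call printed just before the run. A standalone equivalent is sketched below under one assumption: that the rendered call sits inside the usual SPDK JSON-config wrapper (a "bdev" subsystem entry), which the helper composes but the trace does not show. Parameters and bdevperf flags are copied from this run; the config file name is illustrative.

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, I/O size, workload and runtime as the run logged here.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1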
00:10:09.009 00:10:09.009 Latency(us) 00:10:09.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.009 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:09.009 Verification LBA range: start 0x0 length 0x400 00:10:09.009 Nvme0n1 : 1.04 1544.81 96.55 0.00 0.00 40566.42 5749.29 39798.23 00:10:09.009 =================================================================================================================== 00:10:09.009 Total : 1544.81 96.55 0.00 0.00 40566.42 5749.29 39798.23 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.266 19:38:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.266 rmmod nvme_tcp 00:10:09.266 rmmod nvme_fabrics 00:10:09.266 rmmod nvme_keyring 00:10:09.266 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72812 ']' 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72812 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72812 ']' 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72812 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72812 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:09.524 killing process with pid 72812 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72812' 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72812 00:10:09.524 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72812 00:10:09.782 [2024-07-15 19:38:35.306651] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:09.782 00:10:09.782 real 0m6.063s 00:10:09.782 user 0m23.676s 00:10:09.782 sys 0m1.402s 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.782 19:38:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.782 ************************************ 00:10:09.782 END TEST nvmf_host_management 00:10:09.782 ************************************ 00:10:09.782 19:38:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:09.782 19:38:35 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:09.782 19:38:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.782 19:38:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.782 19:38:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.782 ************************************ 00:10:09.782 START TEST nvmf_lvol 00:10:09.782 ************************************ 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:09.782 * Looking for test storage... 
00:10:09.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.782 19:38:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:09.783 19:38:35 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:09.783 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:09.783 Cannot find device "nvmf_tgt_br" 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.040 Cannot find device "nvmf_tgt_br2" 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:10.040 Cannot find device "nvmf_tgt_br" 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:10.040 Cannot find device "nvmf_tgt_br2" 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.040 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:10.041 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:10.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:10:10.298 00:10:10.298 --- 10.0.0.2 ping statistics --- 00:10:10.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.298 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:10.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:10.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:10.298 00:10:10.298 --- 10.0.0.3 ping statistics --- 00:10:10.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.298 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:10.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:10.298 00:10:10.298 --- 10.0.0.1 ping statistics --- 00:10:10.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.298 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73150 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73150 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73150 ']' 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.298 19:38:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:10.298 [2024-07-15 19:38:35.927247] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:10:10.298 [2024-07-15 19:38:35.927343] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.298 [2024-07-15 19:38:36.066043] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.556 [2024-07-15 19:38:36.197265] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.556 [2024-07-15 19:38:36.197601] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:10.556 [2024-07-15 19:38:36.197724] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.556 [2024-07-15 19:38:36.197836] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.556 [2024-07-15 19:38:36.197926] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.556 [2024-07-15 19:38:36.198189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.556 [2024-07-15 19:38:36.198266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.556 [2024-07-15 19:38:36.198272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.488 19:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.488 19:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:10:11.488 19:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.488 19:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.488 19:38:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:11.488 19:38:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.488 19:38:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:11.488 [2024-07-15 19:38:37.260221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.746 19:38:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.002 19:38:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:12.002 19:38:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:12.260 19:38:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:12.260 19:38:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:12.517 19:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:12.777 19:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=91767876-8ef7-4ea8-aedf-be0debc62b71 00:10:12.777 19:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 91767876-8ef7-4ea8-aedf-be0debc62b71 lvol 20 00:10:13.037 19:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6e896ba8-2c15-4568-bf1f-cb56b58f3194 00:10:13.037 19:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:13.300 19:38:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e896ba8-2c15-4568-bf1f-cb56b58f3194 00:10:13.559 19:38:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:13.817 [2024-07-15 19:38:39.400089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.817 19:38:39 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.076 19:38:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73292 00:10:14.076 19:38:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:14.076 19:38:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:15.008 19:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 6e896ba8-2c15-4568-bf1f-cb56b58f3194 MY_SNAPSHOT 00:10:15.265 19:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d63367f4-1b5f-470d-b7b8-73b7dfb15ac8 00:10:15.265 19:38:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 6e896ba8-2c15-4568-bf1f-cb56b58f3194 30 00:10:15.523 19:38:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d63367f4-1b5f-470d-b7b8-73b7dfb15ac8 MY_CLONE 00:10:15.780 19:38:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f78e78fc-2ee6-4938-afe7-90305d42c265 00:10:15.780 19:38:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f78e78fc-2ee6-4938-afe7-90305d42c265 00:10:16.714 19:38:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73292 00:10:24.829 Initializing NVMe Controllers 00:10:24.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:24.829 Controller IO queue size 128, less than required. 00:10:24.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:24.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:24.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:24.829 Initialization complete. Launching workers. 
00:10:24.829 ======================================================== 00:10:24.829 Latency(us) 00:10:24.829 Device Information : IOPS MiB/s Average min max 00:10:24.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10527.30 41.12 12159.01 2610.86 78387.15 00:10:24.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10413.70 40.68 12294.96 1590.90 60076.30 00:10:24.829 ======================================================== 00:10:24.829 Total : 20941.00 81.80 12226.61 1590.90 78387.15 00:10:24.829 00:10:24.829 19:38:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:24.829 19:38:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6e896ba8-2c15-4568-bf1f-cb56b58f3194 00:10:25.104 19:38:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91767876-8ef7-4ea8-aedf-be0debc62b71 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.364 19:38:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.364 rmmod nvme_tcp 00:10:25.364 rmmod nvme_fabrics 00:10:25.364 rmmod nvme_keyring 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73150 ']' 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73150 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73150 ']' 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73150 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73150 00:10:25.364 killing process with pid 73150 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73150' 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73150 00:10:25.364 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73150 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
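For anyone skimming the trace, the nvmf_lvol pass above reduces to a short RPC sequence. The sketch below is reconstructed from the trace and is not the test script itself; it assumes a running nvmf_tgt answering on the default /var/tmp/spdk.sock and reuses the names and sizes exactly as they appear in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Backing store: two malloc bdevs striped into a RAID0, then an lvstore on top.
  $rpc bdev_malloc_create 64 512                      # prints Malloc0
  $rpc bdev_malloc_create 64 512                      # prints Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # lvol named "lvol", size 20 as in the trace

  # Export the lvol over NVMe/TCP on the target address from the trace.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # While I/O is in flight (spdk_nvme_perf in the log), exercise the lvol operations.
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"

The spdk_nvme_perf job in the trace keeps 128 queued random writes against the exported namespace on lcores 3 and 4 while the snapshot/resize/clone/inflate calls run, which is what this test is exercising before the subsystem, lvol and lvstore are torn down.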
00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:25.624 ************************************ 00:10:25.624 END TEST nvmf_lvol 00:10:25.624 ************************************ 00:10:25.624 00:10:25.624 real 0m15.931s 00:10:25.624 user 1m6.336s 00:10:25.624 sys 0m4.088s 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.624 19:38:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:25.624 19:38:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:25.624 19:38:51 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:25.624 19:38:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:25.624 19:38:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.624 19:38:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.883 ************************************ 00:10:25.883 START TEST nvmf_lvs_grow 00:10:25.883 ************************************ 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:25.883 * Looking for test storage... 
00:10:25.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.883 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:25.884 Cannot find device "nvmf_tgt_br" 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.884 Cannot find device "nvmf_tgt_br2" 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:25.884 Cannot find device "nvmf_tgt_br" 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:25.884 Cannot find device "nvmf_tgt_br2" 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:25.884 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:25.884 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:26.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:26.142 00:10:26.142 --- 10.0.0.2 ping statistics --- 00:10:26.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.142 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:26.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:26.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:10:26.142 00:10:26.142 --- 10.0.0.3 ping statistics --- 00:10:26.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.142 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:26.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:26.142 00:10:26.142 --- 10.0.0.1 ping statistics --- 00:10:26.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.142 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73663 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73663 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73663 ']' 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
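For reference, the nvmf_veth_init sequence traced twice above (once per test) builds the following topology. This is a minimal sketch assuming root plus iproute2 and iptables, with addresses and interface names copied from the trace:

  # Target-side namespace with two veth pairs plus an initiator-side veth, all bridged.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: 10.0.0.1 = initiator side, 10.0.0.2 / 10.0.0.3 = target interfaces.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # One bridge ties the host-side veth ends together; allow NVMe/TCP (port 4420) in.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace then confirm that both target addresses are reachable from the host and that the initiator address is reachable from inside the namespace before nvmf_tgt is launched under ip netns exec nvmf_tgt_ns_spdk.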
00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.142 19:38:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.142 [2024-07-15 19:38:51.922825] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:10:26.142 [2024-07-15 19:38:51.922943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.401 [2024-07-15 19:38:52.064259] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.401 [2024-07-15 19:38:52.179885] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.401 [2024-07-15 19:38:52.179946] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.401 [2024-07-15 19:38:52.179958] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.401 [2024-07-15 19:38:52.179967] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.401 [2024-07-15 19:38:52.179974] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.401 [2024-07-15 19:38:52.180004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.336 19:38:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.336 19:38:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:27.336 19:38:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.336 19:38:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.336 19:38:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:27.336 19:38:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.336 19:38:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:27.595 [2024-07-15 19:38:53.182663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:27.595 ************************************ 00:10:27.595 START TEST lvs_grow_clean 00:10:27.595 ************************************ 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:27.595 19:38:53 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:27.595 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:27.853 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:27.853 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:28.111 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:28.111 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:28.111 19:38:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:28.369 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:28.369 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:28.369 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f lvol 150 00:10:28.627 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5947eba4-9fe5-4428-ae93-4a066d43176b 00:10:28.627 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:28.628 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:28.886 [2024-07-15 19:38:54.574160] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:28.886 [2024-07-15 19:38:54.574276] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:28.886 true 00:10:28.886 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:28.886 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:29.144 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:29.144 19:38:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:29.402 19:38:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5947eba4-9fe5-4428-ae93-4a066d43176b 00:10:29.661 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:29.919 [2024-07-15 19:38:55.510761] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.919 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73826 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73826 /var/tmp/bdevperf.sock 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73826 ']' 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.177 19:38:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:30.177 [2024-07-15 19:38:55.810583] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:10:30.177 [2024-07-15 19:38:55.810691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73826 ] 00:10:30.177 [2024-07-15 19:38:55.947176] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.434 [2024-07-15 19:38:56.081936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.000 19:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.000 19:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:31.000 19:38:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:31.566 Nvme0n1 00:10:31.566 19:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:31.566 [ 00:10:31.566 { 00:10:31.566 "aliases": [ 00:10:31.566 "5947eba4-9fe5-4428-ae93-4a066d43176b" 00:10:31.566 ], 00:10:31.566 "assigned_rate_limits": { 00:10:31.566 "r_mbytes_per_sec": 0, 00:10:31.566 "rw_ios_per_sec": 0, 00:10:31.566 "rw_mbytes_per_sec": 0, 00:10:31.566 "w_mbytes_per_sec": 0 00:10:31.566 }, 00:10:31.566 "block_size": 4096, 00:10:31.566 "claimed": false, 00:10:31.566 "driver_specific": { 00:10:31.566 "mp_policy": "active_passive", 00:10:31.566 "nvme": [ 00:10:31.566 { 00:10:31.566 "ctrlr_data": { 00:10:31.566 "ana_reporting": false, 00:10:31.566 "cntlid": 1, 00:10:31.566 "firmware_revision": "24.09", 00:10:31.566 "model_number": "SPDK bdev Controller", 00:10:31.566 "multi_ctrlr": true, 00:10:31.566 "oacs": { 00:10:31.566 "firmware": 0, 00:10:31.566 "format": 0, 00:10:31.566 "ns_manage": 0, 00:10:31.566 "security": 0 00:10:31.566 }, 00:10:31.566 "serial_number": "SPDK0", 00:10:31.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:31.567 "vendor_id": "0x8086" 00:10:31.567 }, 00:10:31.567 "ns_data": { 00:10:31.567 "can_share": true, 00:10:31.567 "id": 1 00:10:31.567 }, 00:10:31.567 "trid": { 00:10:31.567 "adrfam": "IPv4", 00:10:31.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:31.567 "traddr": "10.0.0.2", 00:10:31.567 "trsvcid": "4420", 00:10:31.567 "trtype": "TCP" 00:10:31.567 }, 00:10:31.567 "vs": { 00:10:31.567 "nvme_version": "1.3" 00:10:31.567 } 00:10:31.567 } 00:10:31.567 ] 00:10:31.567 }, 00:10:31.567 "memory_domains": [ 00:10:31.567 { 00:10:31.567 "dma_device_id": "system", 00:10:31.567 "dma_device_type": 1 00:10:31.567 } 00:10:31.567 ], 00:10:31.567 "name": "Nvme0n1", 00:10:31.567 "num_blocks": 38912, 00:10:31.567 "product_name": "NVMe disk", 00:10:31.567 "supported_io_types": { 00:10:31.567 "abort": true, 00:10:31.567 "compare": true, 00:10:31.567 "compare_and_write": true, 00:10:31.567 "copy": true, 00:10:31.567 "flush": true, 00:10:31.567 "get_zone_info": false, 00:10:31.567 "nvme_admin": true, 00:10:31.567 "nvme_io": true, 00:10:31.567 "nvme_io_md": false, 00:10:31.567 "nvme_iov_md": false, 00:10:31.567 "read": true, 00:10:31.567 "reset": true, 00:10:31.567 "seek_data": false, 00:10:31.567 "seek_hole": false, 00:10:31.567 "unmap": true, 00:10:31.567 "write": true, 00:10:31.567 "write_zeroes": true, 00:10:31.567 "zcopy": false, 00:10:31.567 
"zone_append": false, 00:10:31.567 "zone_management": false 00:10:31.567 }, 00:10:31.567 "uuid": "5947eba4-9fe5-4428-ae93-4a066d43176b", 00:10:31.567 "zoned": false 00:10:31.567 } 00:10:31.567 ] 00:10:31.827 19:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73879 00:10:31.827 19:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:31.827 19:38:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:31.827 Running I/O for 10 seconds... 00:10:32.772 Latency(us) 00:10:32.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.772 Nvme0n1 : 1.00 8232.00 32.16 0.00 0.00 0.00 0.00 0.00 00:10:32.772 =================================================================================================================== 00:10:32.772 Total : 8232.00 32.16 0.00 0.00 0.00 0.00 0.00 00:10:32.772 00:10:33.703 19:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:33.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.703 Nvme0n1 : 2.00 8350.00 32.62 0.00 0.00 0.00 0.00 0.00 00:10:33.703 =================================================================================================================== 00:10:33.703 Total : 8350.00 32.62 0.00 0.00 0.00 0.00 0.00 00:10:33.703 00:10:33.961 true 00:10:33.961 19:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:33.961 19:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:34.528 19:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:34.528 19:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:34.528 19:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73879 00:10:34.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.786 Nvme0n1 : 3.00 8275.00 32.32 0.00 0.00 0.00 0.00 0.00 00:10:34.786 =================================================================================================================== 00:10:34.786 Total : 8275.00 32.32 0.00 0.00 0.00 0.00 0.00 00:10:34.786 00:10:35.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.719 Nvme0n1 : 4.00 8259.00 32.26 0.00 0.00 0.00 0.00 0.00 00:10:35.719 =================================================================================================================== 00:10:35.719 Total : 8259.00 32.26 0.00 0.00 0.00 0.00 0.00 00:10:35.719 00:10:37.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.090 Nvme0n1 : 5.00 8257.00 32.25 0.00 0.00 0.00 0.00 0.00 00:10:37.090 =================================================================================================================== 00:10:37.090 Total : 8257.00 32.25 0.00 0.00 0.00 0.00 0.00 00:10:37.090 00:10:38.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.019 
Nvme0n1 : 6.00 8257.67 32.26 0.00 0.00 0.00 0.00 0.00 00:10:38.019 =================================================================================================================== 00:10:38.019 Total : 8257.67 32.26 0.00 0.00 0.00 0.00 0.00 00:10:38.019 00:10:38.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.950 Nvme0n1 : 7.00 8237.43 32.18 0.00 0.00 0.00 0.00 0.00 00:10:38.950 =================================================================================================================== 00:10:38.950 Total : 8237.43 32.18 0.00 0.00 0.00 0.00 0.00 00:10:38.950 00:10:39.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.894 Nvme0n1 : 8.00 8222.12 32.12 0.00 0.00 0.00 0.00 0.00 00:10:39.894 =================================================================================================================== 00:10:39.894 Total : 8222.12 32.12 0.00 0.00 0.00 0.00 0.00 00:10:39.894 00:10:40.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.825 Nvme0n1 : 9.00 8206.00 32.05 0.00 0.00 0.00 0.00 0.00 00:10:40.825 =================================================================================================================== 00:10:40.825 Total : 8206.00 32.05 0.00 0.00 0.00 0.00 0.00 00:10:40.825 00:10:41.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.757 Nvme0n1 : 10.00 8188.30 31.99 0.00 0.00 0.00 0.00 0.00 00:10:41.757 =================================================================================================================== 00:10:41.757 Total : 8188.30 31.99 0.00 0.00 0.00 0.00 0.00 00:10:41.757 00:10:41.757 00:10:41.757 Latency(us) 00:10:41.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.757 Nvme0n1 : 10.00 8198.69 32.03 0.00 0.00 15607.75 7536.64 34078.72 00:10:41.757 =================================================================================================================== 00:10:41.757 Total : 8198.69 32.03 0.00 0.00 15607.75 7536.64 34078.72 00:10:41.757 0 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73826 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73826 ']' 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73826 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73826 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:41.757 killing process with pid 73826 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73826' 00:10:41.757 Received shutdown signal, test time was about 10.000000 seconds 00:10:41.757 00:10:41.757 Latency(us) 00:10:41.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.757 
=================================================================================================================== 00:10:41.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73826 00:10:41.757 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73826 00:10:42.014 19:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:42.357 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:42.615 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:42.615 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:42.873 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:42.873 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:42.873 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:43.130 [2024-07-15 19:39:08.867868] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:43.131 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:43.131 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:43.131 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:43.131 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.131 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.131 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.389 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.389 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.389 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.389 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.389 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:43.389 19:39:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:43.389 2024/07/15 19:39:09 error on 
JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:09a94afe-c2c2-4b0e-addd-a35a18a6d39f], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:43.389 request: 00:10:43.389 { 00:10:43.389 "method": "bdev_lvol_get_lvstores", 00:10:43.389 "params": { 00:10:43.389 "uuid": "09a94afe-c2c2-4b0e-addd-a35a18a6d39f" 00:10:43.389 } 00:10:43.389 } 00:10:43.389 Got JSON-RPC error response 00:10:43.389 GoRPCClient: error on JSON-RPC call 00:10:43.389 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:43.389 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:43.389 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:43.389 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:43.389 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:43.648 aio_bdev 00:10:43.648 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5947eba4-9fe5-4428-ae93-4a066d43176b 00:10:43.648 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5947eba4-9fe5-4428-ae93-4a066d43176b 00:10:43.648 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:43.648 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:43.648 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:43.648 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:43.648 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:43.907 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5947eba4-9fe5-4428-ae93-4a066d43176b -t 2000 00:10:44.166 [ 00:10:44.166 { 00:10:44.166 "aliases": [ 00:10:44.166 "lvs/lvol" 00:10:44.166 ], 00:10:44.166 "assigned_rate_limits": { 00:10:44.166 "r_mbytes_per_sec": 0, 00:10:44.166 "rw_ios_per_sec": 0, 00:10:44.166 "rw_mbytes_per_sec": 0, 00:10:44.166 "w_mbytes_per_sec": 0 00:10:44.166 }, 00:10:44.166 "block_size": 4096, 00:10:44.166 "claimed": false, 00:10:44.166 "driver_specific": { 00:10:44.166 "lvol": { 00:10:44.166 "base_bdev": "aio_bdev", 00:10:44.166 "clone": false, 00:10:44.166 "esnap_clone": false, 00:10:44.166 "lvol_store_uuid": "09a94afe-c2c2-4b0e-addd-a35a18a6d39f", 00:10:44.166 "num_allocated_clusters": 38, 00:10:44.166 "snapshot": false, 00:10:44.166 "thin_provision": false 00:10:44.166 } 00:10:44.166 }, 00:10:44.166 "name": "5947eba4-9fe5-4428-ae93-4a066d43176b", 00:10:44.166 "num_blocks": 38912, 00:10:44.166 "product_name": "Logical Volume", 00:10:44.166 "supported_io_types": { 00:10:44.166 "abort": false, 00:10:44.166 "compare": false, 00:10:44.166 "compare_and_write": false, 00:10:44.166 "copy": false, 00:10:44.166 "flush": false, 00:10:44.166 "get_zone_info": false, 00:10:44.166 "nvme_admin": false, 00:10:44.166 "nvme_io": false, 00:10:44.166 "nvme_io_md": false, 00:10:44.166 "nvme_iov_md": false, 00:10:44.166 "read": true, 
00:10:44.166 "reset": true, 00:10:44.166 "seek_data": true, 00:10:44.166 "seek_hole": true, 00:10:44.166 "unmap": true, 00:10:44.166 "write": true, 00:10:44.166 "write_zeroes": true, 00:10:44.166 "zcopy": false, 00:10:44.166 "zone_append": false, 00:10:44.166 "zone_management": false 00:10:44.166 }, 00:10:44.166 "uuid": "5947eba4-9fe5-4428-ae93-4a066d43176b", 00:10:44.166 "zoned": false 00:10:44.166 } 00:10:44.166 ] 00:10:44.166 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:44.166 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:44.166 19:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:44.423 19:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:44.424 19:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:44.424 19:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:44.988 19:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:44.988 19:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5947eba4-9fe5-4428-ae93-4a066d43176b 00:10:44.988 19:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09a94afe-c2c2-4b0e-addd-a35a18a6d39f 00:10:45.246 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:45.509 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:46.092 ************************************ 00:10:46.092 END TEST lvs_grow_clean 00:10:46.092 ************************************ 00:10:46.092 00:10:46.092 real 0m18.394s 00:10:46.092 user 0m17.740s 00:10:46.092 sys 0m2.206s 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:46.092 ************************************ 00:10:46.092 START TEST lvs_grow_dirty 00:10:46.092 ************************************ 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 
-- # local data_clusters free_clusters 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:46.092 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:46.350 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:46.350 19:39:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:46.607 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:10:46.607 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:10:46.607 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:46.865 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:46.865 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:46.865 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e lvol 150 00:10:47.123 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=212edcc2-ec5a-4f23-961d-e540e0c1a7f7 00:10:47.123 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:47.123 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:47.381 [2024-07-15 19:39:12.909024] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:47.381 [2024-07-15 19:39:12.909131] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:47.381 true 00:10:47.381 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:10:47.381 19:39:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:47.638 19:39:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:47.638 19:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:47.896 19:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 212edcc2-ec5a-4f23-961d-e540e0c1a7f7 00:10:48.153 19:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:48.153 [2024-07-15 19:39:13.933600] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.411 19:39:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74275 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74275 /var/tmp/bdevperf.sock 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74275 ']' 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.411 19:39:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:48.668 [2024-07-15 19:39:14.227634] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
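For orientation, the target/initiator wiring being exercised at this point condenses to the sketch below: the subsystem, namespace and listeners created above, plus the bdevperf attach that follows. Every command appears in the trace; the lvol UUID 212edcc2-ec5a-4f23-961d-e540e0c1a7f7, the address 10.0.0.2:4420 and the socket paths are specific to this run, and backgrounding bdevperf with & is a simplification of the harness's process handling.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Export the lvol over NVMe/TCP from the running nvmf_tgt:
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 212edcc2-ec5a-4f23-961d-e540e0c1a7f7
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Start bdevperf idle (-z) on its own RPC socket, then attach it to the subsystem:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0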
00:10:48.668 [2024-07-15 19:39:14.227738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74275 ] 00:10:48.668 [2024-07-15 19:39:14.363817] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.926 [2024-07-15 19:39:14.479756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.492 19:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.492 19:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:49.492 19:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:50.097 Nvme0n1 00:10:50.097 19:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:50.097 [ 00:10:50.097 { 00:10:50.097 "aliases": [ 00:10:50.097 "212edcc2-ec5a-4f23-961d-e540e0c1a7f7" 00:10:50.097 ], 00:10:50.097 "assigned_rate_limits": { 00:10:50.097 "r_mbytes_per_sec": 0, 00:10:50.097 "rw_ios_per_sec": 0, 00:10:50.097 "rw_mbytes_per_sec": 0, 00:10:50.097 "w_mbytes_per_sec": 0 00:10:50.097 }, 00:10:50.097 "block_size": 4096, 00:10:50.097 "claimed": false, 00:10:50.097 "driver_specific": { 00:10:50.097 "mp_policy": "active_passive", 00:10:50.097 "nvme": [ 00:10:50.097 { 00:10:50.097 "ctrlr_data": { 00:10:50.097 "ana_reporting": false, 00:10:50.097 "cntlid": 1, 00:10:50.097 "firmware_revision": "24.09", 00:10:50.097 "model_number": "SPDK bdev Controller", 00:10:50.097 "multi_ctrlr": true, 00:10:50.097 "oacs": { 00:10:50.097 "firmware": 0, 00:10:50.097 "format": 0, 00:10:50.097 "ns_manage": 0, 00:10:50.097 "security": 0 00:10:50.097 }, 00:10:50.097 "serial_number": "SPDK0", 00:10:50.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:50.097 "vendor_id": "0x8086" 00:10:50.097 }, 00:10:50.097 "ns_data": { 00:10:50.098 "can_share": true, 00:10:50.098 "id": 1 00:10:50.098 }, 00:10:50.098 "trid": { 00:10:50.098 "adrfam": "IPv4", 00:10:50.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:50.098 "traddr": "10.0.0.2", 00:10:50.098 "trsvcid": "4420", 00:10:50.098 "trtype": "TCP" 00:10:50.098 }, 00:10:50.098 "vs": { 00:10:50.098 "nvme_version": "1.3" 00:10:50.098 } 00:10:50.098 } 00:10:50.098 ] 00:10:50.098 }, 00:10:50.098 "memory_domains": [ 00:10:50.098 { 00:10:50.098 "dma_device_id": "system", 00:10:50.098 "dma_device_type": 1 00:10:50.098 } 00:10:50.098 ], 00:10:50.098 "name": "Nvme0n1", 00:10:50.098 "num_blocks": 38912, 00:10:50.098 "product_name": "NVMe disk", 00:10:50.098 "supported_io_types": { 00:10:50.098 "abort": true, 00:10:50.098 "compare": true, 00:10:50.098 "compare_and_write": true, 00:10:50.098 "copy": true, 00:10:50.098 "flush": true, 00:10:50.098 "get_zone_info": false, 00:10:50.098 "nvme_admin": true, 00:10:50.098 "nvme_io": true, 00:10:50.098 "nvme_io_md": false, 00:10:50.098 "nvme_iov_md": false, 00:10:50.098 "read": true, 00:10:50.098 "reset": true, 00:10:50.098 "seek_data": false, 00:10:50.098 "seek_hole": false, 00:10:50.098 "unmap": true, 00:10:50.098 "write": true, 00:10:50.098 "write_zeroes": true, 00:10:50.098 "zcopy": false, 00:10:50.098 
"zone_append": false, 00:10:50.098 "zone_management": false 00:10:50.098 }, 00:10:50.098 "uuid": "212edcc2-ec5a-4f23-961d-e540e0c1a7f7", 00:10:50.098 "zoned": false 00:10:50.098 } 00:10:50.098 ] 00:10:50.098 19:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74323 00:10:50.098 19:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:50.098 19:39:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:50.356 Running I/O for 10 seconds... 00:10:51.290 Latency(us) 00:10:51.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.290 Nvme0n1 : 1.00 8686.00 33.93 0.00 0.00 0.00 0.00 0.00 00:10:51.290 =================================================================================================================== 00:10:51.290 Total : 8686.00 33.93 0.00 0.00 0.00 0.00 0.00 00:10:51.290 00:10:52.227 19:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:10:52.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.227 Nvme0n1 : 2.00 8587.50 33.54 0.00 0.00 0.00 0.00 0.00 00:10:52.227 =================================================================================================================== 00:10:52.227 Total : 8587.50 33.54 0.00 0.00 0.00 0.00 0.00 00:10:52.227 00:10:52.485 true 00:10:52.485 19:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:10:52.485 19:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:52.744 19:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:52.744 19:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:52.744 19:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74323 00:10:53.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.312 Nvme0n1 : 3.00 8579.67 33.51 0.00 0.00 0.00 0.00 0.00 00:10:53.312 =================================================================================================================== 00:10:53.312 Total : 8579.67 33.51 0.00 0.00 0.00 0.00 0.00 00:10:53.312 00:10:54.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.259 Nvme0n1 : 4.00 8547.50 33.39 0.00 0.00 0.00 0.00 0.00 00:10:54.259 =================================================================================================================== 00:10:54.259 Total : 8547.50 33.39 0.00 0.00 0.00 0.00 0.00 00:10:54.259 00:10:55.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.636 Nvme0n1 : 5.00 8518.40 33.27 0.00 0.00 0.00 0.00 0.00 00:10:55.636 =================================================================================================================== 00:10:55.636 Total : 8518.40 33.27 0.00 0.00 0.00 0.00 0.00 00:10:55.636 00:10:56.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.199 
Nvme0n1 : 6.00 8352.67 32.63 0.00 0.00 0.00 0.00 0.00 00:10:56.199 =================================================================================================================== 00:10:56.199 Total : 8352.67 32.63 0.00 0.00 0.00 0.00 0.00 00:10:56.199 00:10:57.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.574 Nvme0n1 : 7.00 8294.43 32.40 0.00 0.00 0.00 0.00 0.00 00:10:57.574 =================================================================================================================== 00:10:57.574 Total : 8294.43 32.40 0.00 0.00 0.00 0.00 0.00 00:10:57.574 00:10:58.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.507 Nvme0n1 : 8.00 8256.38 32.25 0.00 0.00 0.00 0.00 0.00 00:10:58.507 =================================================================================================================== 00:10:58.507 Total : 8256.38 32.25 0.00 0.00 0.00 0.00 0.00 00:10:58.507 00:10:59.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.444 Nvme0n1 : 9.00 8210.00 32.07 0.00 0.00 0.00 0.00 0.00 00:10:59.444 =================================================================================================================== 00:10:59.444 Total : 8210.00 32.07 0.00 0.00 0.00 0.00 0.00 00:10:59.444 00:11:00.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.380 Nvme0n1 : 10.00 8176.40 31.94 0.00 0.00 0.00 0.00 0.00 00:11:00.380 =================================================================================================================== 00:11:00.380 Total : 8176.40 31.94 0.00 0.00 0.00 0.00 0.00 00:11:00.380 00:11:00.380 00:11:00.380 Latency(us) 00:11:00.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.380 Nvme0n1 : 10.01 8178.79 31.95 0.00 0.00 15644.64 6464.23 134408.38 00:11:00.380 =================================================================================================================== 00:11:00.380 Total : 8178.79 31.95 0.00 0.00 15644.64 6464.23 134408.38 00:11:00.380 0 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74275 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74275 ']' 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74275 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74275 00:11:00.380 killing process with pid 74275 00:11:00.380 Received shutdown signal, test time was about 10.000000 seconds 00:11:00.380 00:11:00.380 Latency(us) 00:11:00.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.380 =================================================================================================================== 00:11:00.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 
= sudo ']' 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74275' 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74275 00:11:00.380 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74275 00:11:00.639 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:00.898 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:01.157 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:01.157 19:39:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73663 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73663 00:11:01.431 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73663 Killed "${NVMF_APP[@]}" "$@" 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74486 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74486 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74486 ']' 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
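What the dirty variant is checking here: the first nvmf_tgt (pid 73663) was SIGKILLed above without ever unloading the lvstore, so its metadata was not written out cleanly. A fresh target is now starting, and re-attaching the same backing file is what produces the blobstore recovery notice a little further below. Roughly, using only commands from this trace (pids, paths and the lvstore UUID are from this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
kill -9 73663        # nvmf_tgt dies with the lvstore still open ("dirty")
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# ... wait for /var/tmp/spdk.sock, then re-attach the same 400M AIO file:
$rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
# Loading the blobstore replays the unsaved metadata ("Performing recovery on blobstore"),
# after which the grown cluster counts must still be visible:
$rpc bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e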
00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.431 19:39:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.431 [2024-07-15 19:39:27.089295] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:01.431 [2024-07-15 19:39:27.089381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.689 [2024-07-15 19:39:27.226764] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.689 [2024-07-15 19:39:27.339698] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.689 [2024-07-15 19:39:27.339788] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.689 [2024-07-15 19:39:27.339815] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.689 [2024-07-15 19:39:27.339823] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.689 [2024-07-15 19:39:27.339831] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.689 [2024-07-15 19:39:27.339854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:02.626 [2024-07-15 19:39:28.347635] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:02.626 [2024-07-15 19:39:28.347896] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:02.626 [2024-07-15 19:39:28.348128] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 212edcc2-ec5a-4f23-961d-e540e0c1a7f7 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=212edcc2-ec5a-4f23-961d-e540e0c1a7f7 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:02.626 19:39:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:02.626 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:03.192 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 212edcc2-ec5a-4f23-961d-e540e0c1a7f7 -t 2000 00:11:03.192 [ 00:11:03.192 { 00:11:03.192 "aliases": [ 00:11:03.192 "lvs/lvol" 00:11:03.192 ], 00:11:03.192 "assigned_rate_limits": { 00:11:03.192 "r_mbytes_per_sec": 0, 00:11:03.192 "rw_ios_per_sec": 0, 00:11:03.192 "rw_mbytes_per_sec": 0, 00:11:03.192 "w_mbytes_per_sec": 0 00:11:03.192 }, 00:11:03.192 "block_size": 4096, 00:11:03.192 "claimed": false, 00:11:03.192 "driver_specific": { 00:11:03.192 "lvol": { 00:11:03.192 "base_bdev": "aio_bdev", 00:11:03.192 "clone": false, 00:11:03.192 "esnap_clone": false, 00:11:03.192 "lvol_store_uuid": "90f6fd09-3d78-41da-8a8f-a19f56888a7e", 00:11:03.192 "num_allocated_clusters": 38, 00:11:03.192 "snapshot": false, 00:11:03.192 "thin_provision": false 00:11:03.192 } 00:11:03.192 }, 00:11:03.192 "name": "212edcc2-ec5a-4f23-961d-e540e0c1a7f7", 00:11:03.192 "num_blocks": 38912, 00:11:03.192 "product_name": "Logical Volume", 00:11:03.192 "supported_io_types": { 00:11:03.192 "abort": false, 00:11:03.192 "compare": false, 00:11:03.192 "compare_and_write": false, 00:11:03.192 "copy": false, 00:11:03.192 "flush": false, 00:11:03.192 "get_zone_info": false, 00:11:03.192 "nvme_admin": false, 00:11:03.192 "nvme_io": false, 00:11:03.192 "nvme_io_md": false, 00:11:03.192 "nvme_iov_md": false, 00:11:03.192 "read": true, 00:11:03.192 "reset": true, 00:11:03.192 "seek_data": true, 00:11:03.192 "seek_hole": true, 00:11:03.192 "unmap": true, 00:11:03.192 "write": true, 00:11:03.192 "write_zeroes": true, 00:11:03.192 "zcopy": false, 00:11:03.192 "zone_append": false, 00:11:03.192 "zone_management": false 00:11:03.192 }, 00:11:03.192 "uuid": "212edcc2-ec5a-4f23-961d-e540e0c1a7f7", 00:11:03.192 "zoned": false 00:11:03.192 } 00:11:03.192 ] 00:11:03.192 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:03.192 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:03.192 19:39:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:03.452 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:03.452 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:03.452 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:03.710 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:03.710 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:03.968 [2024-07-15 19:39:29.608912] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.968 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.969 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:03.969 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:04.227 2024/07/15 19:39:29 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:90f6fd09-3d78-41da-8a8f-a19f56888a7e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:11:04.227 request: 00:11:04.227 { 00:11:04.227 "method": "bdev_lvol_get_lvstores", 00:11:04.227 "params": { 00:11:04.227 "uuid": "90f6fd09-3d78-41da-8a8f-a19f56888a7e" 00:11:04.227 } 00:11:04.227 } 00:11:04.227 Got JSON-RPC error response 00:11:04.227 GoRPCClient: error on JSON-RPC call 00:11:04.227 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:04.227 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:04.227 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:04.227 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:04.227 19:39:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.485 aio_bdev 00:11:04.485 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 212edcc2-ec5a-4f23-961d-e540e0c1a7f7 00:11:04.485 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=212edcc2-ec5a-4f23-961d-e540e0c1a7f7 00:11:04.485 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:04.485 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:04.485 19:39:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:04.485 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:04.485 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:04.743 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 212edcc2-ec5a-4f23-961d-e540e0c1a7f7 -t 2000 00:11:05.001 [ 00:11:05.001 { 00:11:05.001 "aliases": [ 00:11:05.001 "lvs/lvol" 00:11:05.001 ], 00:11:05.001 "assigned_rate_limits": { 00:11:05.001 "r_mbytes_per_sec": 0, 00:11:05.001 "rw_ios_per_sec": 0, 00:11:05.001 "rw_mbytes_per_sec": 0, 00:11:05.001 "w_mbytes_per_sec": 0 00:11:05.001 }, 00:11:05.001 "block_size": 4096, 00:11:05.001 "claimed": false, 00:11:05.001 "driver_specific": { 00:11:05.001 "lvol": { 00:11:05.001 "base_bdev": "aio_bdev", 00:11:05.001 "clone": false, 00:11:05.001 "esnap_clone": false, 00:11:05.001 "lvol_store_uuid": "90f6fd09-3d78-41da-8a8f-a19f56888a7e", 00:11:05.001 "num_allocated_clusters": 38, 00:11:05.001 "snapshot": false, 00:11:05.001 "thin_provision": false 00:11:05.001 } 00:11:05.001 }, 00:11:05.001 "name": "212edcc2-ec5a-4f23-961d-e540e0c1a7f7", 00:11:05.001 "num_blocks": 38912, 00:11:05.001 "product_name": "Logical Volume", 00:11:05.001 "supported_io_types": { 00:11:05.001 "abort": false, 00:11:05.001 "compare": false, 00:11:05.001 "compare_and_write": false, 00:11:05.001 "copy": false, 00:11:05.001 "flush": false, 00:11:05.001 "get_zone_info": false, 00:11:05.001 "nvme_admin": false, 00:11:05.001 "nvme_io": false, 00:11:05.001 "nvme_io_md": false, 00:11:05.001 "nvme_iov_md": false, 00:11:05.001 "read": true, 00:11:05.001 "reset": true, 00:11:05.001 "seek_data": true, 00:11:05.001 "seek_hole": true, 00:11:05.001 "unmap": true, 00:11:05.001 "write": true, 00:11:05.001 "write_zeroes": true, 00:11:05.001 "zcopy": false, 00:11:05.001 "zone_append": false, 00:11:05.001 "zone_management": false 00:11:05.001 }, 00:11:05.001 "uuid": "212edcc2-ec5a-4f23-961d-e540e0c1a7f7", 00:11:05.001 "zoned": false 00:11:05.001 } 00:11:05.001 ] 00:11:05.001 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:05.001 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:05.001 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:05.263 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:05.263 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:05.263 19:39:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:05.522 19:39:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:05.522 19:39:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 212edcc2-ec5a-4f23-961d-e540e0c1a7f7 00:11:05.780 19:39:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90f6fd09-3d78-41da-8a8f-a19f56888a7e 00:11:06.038 19:39:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:06.297 19:39:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:06.556 00:11:06.556 real 0m20.611s 00:11:06.556 user 0m43.251s 00:11:06.556 sys 0m7.988s 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:06.556 ************************************ 00:11:06.556 END TEST lvs_grow_dirty 00:11:06.556 ************************************ 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:06.556 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:06.556 nvmf_trace.0 00:11:06.815 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:06.815 19:39:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:06.815 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:06.815 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:06.815 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.815 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:06.815 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.815 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.815 rmmod nvme_tcp 00:11:06.815 rmmod nvme_fabrics 00:11:06.815 rmmod nvme_keyring 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74486 ']' 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74486 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74486 ']' 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74486 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:07.074 19:39:32 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74486 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:07.074 killing process with pid 74486 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74486' 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74486 00:11:07.074 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74486 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:07.333 00:11:07.333 real 0m41.503s 00:11:07.333 user 1m7.380s 00:11:07.333 sys 0m10.924s 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.333 19:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:07.333 ************************************ 00:11:07.333 END TEST nvmf_lvs_grow 00:11:07.333 ************************************ 00:11:07.333 19:39:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:07.333 19:39:32 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:07.333 19:39:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.333 19:39:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.333 19:39:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:07.333 ************************************ 00:11:07.333 START TEST nvmf_bdev_io_wait 00:11:07.333 ************************************ 00:11:07.333 19:39:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:07.333 * Looking for test storage... 
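Before the next test gets under way, note that the teardown that just ran above (process_shm followed by nvmftestfini) boils down to a handful of steps; the netns deletion below is an assumed equivalent of what the _remove_spdk_ns helper does in this environment, and the pid is the one from this run.

# Preserve the trace shared-memory file for offline analysis:
tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
sync
# Unload the kernel initiator modules and stop the target:
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 74486                          # nvmfpid of this run
# Tear the virtual network down (assumed equivalent of _remove_spdk_ns):
ip netns delete nvmf_tgt_ns_spdk
ip -4 addr flush nvmf_init_if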
00:11:07.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.333 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:07.334 Cannot find device "nvmf_tgt_br" 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:11:07.334 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.593 Cannot find device "nvmf_tgt_br2" 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:07.593 Cannot find device "nvmf_tgt_br" 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:07.593 Cannot find device "nvmf_tgt_br2" 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
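The "Cannot find device" and "Cannot open network namespace" messages around here are expected: nvmftestinit first removes whatever interfaces, bridge and namespace a previous run may have left behind, and any piece that is already gone just produces an error that the helper ignores. A tolerant pre-clean of that kind might look like the following; the || true is an approximation of how the helper swallows the errors.

ip link set nvmf_tgt_br nomaster   2>/dev/null || true
ip link set nvmf_tgt_br down       2>/dev/null || true
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if        2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true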
00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:07.593 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:07.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:11:07.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:11:07.852 00:11:07.852 --- 10.0.0.2 ping statistics --- 00:11:07.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.852 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:07.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:07.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:11:07.852 00:11:07.852 --- 10.0.0.3 ping statistics --- 00:11:07.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.852 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:07.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:07.852 00:11:07.852 --- 10.0.0.1 ping statistics --- 00:11:07.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.852 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74899 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74899 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74899 ']' 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
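Before the target application is started (the nvmfappstart call just above), the commands traced earlier assemble the virtual network that this test uses and that the later queue-depth test rebuilds: the initiator address 10.0.0.1 sits on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, and the nvmf_br bridge joins the three peer ends so the ping checks pass before any NVMe/TCP traffic starts. A condensed, standalone reconstruction of that topology, with the same names, addresses and firewall rules as in the log (error handling omitted):

    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one for the initiator, two for the target.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator in the root namespace, target IPs inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the peer ends; allow NVMe/TCP (port 4420) and intra-bridge forwarding.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, as in the trace.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
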
00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.852 19:39:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:07.852 [2024-07-15 19:39:33.495215] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:07.852 [2024-07-15 19:39:33.495380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.110 [2024-07-15 19:39:33.638835] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.110 [2024-07-15 19:39:33.772411] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.110 [2024-07-15 19:39:33.772481] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.110 [2024-07-15 19:39:33.772496] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.110 [2024-07-15 19:39:33.772506] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.110 [2024-07-15 19:39:33.772515] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.110 [2024-07-15 19:39:33.772898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.110 [2024-07-15 19:39:33.773065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.110 [2024-07-15 19:39:33.773197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.110 [2024-07-15 19:39:33.773199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.046 19:39:34 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 [2024-07-15 19:39:34.598648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 Malloc0 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 [2024-07-15 19:39:34.658315] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74953 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74955 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:09.046 { 00:11:09.046 "params": { 00:11:09.046 "name": "Nvme$subsystem", 00:11:09.046 "trtype": "$TEST_TRANSPORT", 00:11:09.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.046 "adrfam": "ipv4", 00:11:09.046 "trsvcid": "$NVMF_PORT", 00:11:09.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.046 "hdgst": ${hdgst:-false}, 00:11:09.046 "ddgst": 
${ddgst:-false} 00:11:09.046 }, 00:11:09.046 "method": "bdev_nvme_attach_controller" 00:11:09.046 } 00:11:09.046 EOF 00:11:09.046 )") 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:09.046 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:09.046 { 00:11:09.046 "params": { 00:11:09.046 "name": "Nvme$subsystem", 00:11:09.046 "trtype": "$TEST_TRANSPORT", 00:11:09.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.046 "adrfam": "ipv4", 00:11:09.046 "trsvcid": "$NVMF_PORT", 00:11:09.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.047 "hdgst": ${hdgst:-false}, 00:11:09.047 "ddgst": ${ddgst:-false} 00:11:09.047 }, 00:11:09.047 "method": "bdev_nvme_attach_controller" 00:11:09.047 } 00:11:09.047 EOF 00:11:09.047 )") 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74957 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74962 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:09.047 { 00:11:09.047 "params": { 00:11:09.047 "name": "Nvme$subsystem", 00:11:09.047 "trtype": "$TEST_TRANSPORT", 00:11:09.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.047 "adrfam": "ipv4", 00:11:09.047 "trsvcid": "$NVMF_PORT", 00:11:09.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.047 "hdgst": ${hdgst:-false}, 00:11:09.047 "ddgst": ${ddgst:-false} 00:11:09.047 }, 00:11:09.047 "method": "bdev_nvme_attach_controller" 00:11:09.047 } 00:11:09.047 EOF 00:11:09.047 )") 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:09.047 { 00:11:09.047 "params": { 00:11:09.047 "name": "Nvme$subsystem", 00:11:09.047 "trtype": "$TEST_TRANSPORT", 00:11:09.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.047 "adrfam": "ipv4", 00:11:09.047 "trsvcid": "$NVMF_PORT", 00:11:09.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.047 "hdgst": ${hdgst:-false}, 00:11:09.047 "ddgst": ${ddgst:-false} 00:11:09.047 }, 00:11:09.047 "method": "bdev_nvme_attach_controller" 00:11:09.047 } 00:11:09.047 EOF 00:11:09.047 )") 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:09.047 "params": { 00:11:09.047 "name": "Nvme1", 00:11:09.047 "trtype": "tcp", 00:11:09.047 "traddr": "10.0.0.2", 00:11:09.047 "adrfam": "ipv4", 00:11:09.047 "trsvcid": "4420", 00:11:09.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.047 "hdgst": false, 00:11:09.047 "ddgst": false 00:11:09.047 }, 00:11:09.047 "method": "bdev_nvme_attach_controller" 00:11:09.047 }' 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:09.047 "params": { 00:11:09.047 "name": "Nvme1", 00:11:09.047 "trtype": "tcp", 00:11:09.047 "traddr": "10.0.0.2", 00:11:09.047 "adrfam": "ipv4", 00:11:09.047 "trsvcid": "4420", 00:11:09.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.047 "hdgst": false, 00:11:09.047 "ddgst": false 00:11:09.047 }, 00:11:09.047 "method": "bdev_nvme_attach_controller" 00:11:09.047 }' 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:09.047 "params": { 00:11:09.047 "name": "Nvme1", 00:11:09.047 "trtype": "tcp", 00:11:09.047 "traddr": "10.0.0.2", 00:11:09.047 "adrfam": "ipv4", 00:11:09.047 "trsvcid": "4420", 00:11:09.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.047 "hdgst": false, 00:11:09.047 "ddgst": false 00:11:09.047 }, 00:11:09.047 "method": "bdev_nvme_attach_controller" 00:11:09.047 }' 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:09.047 "params": { 00:11:09.047 "name": "Nvme1", 00:11:09.047 "trtype": "tcp", 00:11:09.047 "traddr": "10.0.0.2", 00:11:09.047 "adrfam": "ipv4", 00:11:09.047 "trsvcid": "4420", 00:11:09.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.047 "hdgst": false, 00:11:09.047 "ddgst": false 00:11:09.047 }, 00:11:09.047 "method": "bdev_nvme_attach_controller" 00:11:09.047 }' 00:11:09.047 19:39:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74953 00:11:09.047 [2024-07-15 19:39:34.723359] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:09.047 [2024-07-15 19:39:34.723459] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:09.047 [2024-07-15 19:39:34.726896] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:09.047 [2024-07-15 19:39:34.726974] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:09.047 [2024-07-15 19:39:34.761469] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:09.047 [2024-07-15 19:39:34.761575] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:09.047 [2024-07-15 19:39:34.780973] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:09.047 [2024-07-15 19:39:34.781111] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:09.305 [2024-07-15 19:39:34.937481] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.305 [2024-07-15 19:39:35.010982] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.305 [2024-07-15 19:39:35.059384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:09.562 [2024-07-15 19:39:35.096172] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.562 [2024-07-15 19:39:35.113535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:09.562 [2024-07-15 19:39:35.163674] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.562 [2024-07-15 19:39:35.197061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:09.562 Running I/O for 1 seconds... 00:11:09.562 Running I/O for 1 seconds... 00:11:09.562 [2024-07-15 19:39:35.265484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:09.819 Running I/O for 1 seconds... 00:11:09.819 Running I/O for 1 seconds... 
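Each of the four bdevperf instances launched above (write, read, flush and unmap, with core masks 0x10/0x20/0x40/0x80) receives its bdev configuration through --json /dev/fd/63, i.e. from a process substitution fed by gen_nvmf_target_json; the printf calls in the trace show the per-controller fragment that function emits, attaching "Nvme1" over TCP to the nqn.2016-06.io.spdk:cnode1 subsystem exported at 10.0.0.2:4420. A hedged sketch of an equivalent standalone run, writing the config to a temporary file instead; note that the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is an assumption here, since the trace only prints the inner object:

    cat > /tmp/nvme1_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Same workload parameters as the WRITE_PID instance in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json /tmp/nvme1_bdev.json -q 128 -o 4096 -w write -t 1 -s 256
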
00:11:10.771 00:11:10.771 Latency(us) 00:11:10.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.771 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:10.771 Nvme1n1 : 1.00 193770.24 756.92 0.00 0.00 657.86 279.27 1042.62 00:11:10.771 =================================================================================================================== 00:11:10.771 Total : 193770.24 756.92 0.00 0.00 657.86 279.27 1042.62 00:11:10.771 00:11:10.771 Latency(us) 00:11:10.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.771 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:10.771 Nvme1n1 : 1.01 9746.30 38.07 0.00 0.00 13067.35 8698.41 18707.55 00:11:10.771 =================================================================================================================== 00:11:10.771 Total : 9746.30 38.07 0.00 0.00 13067.35 8698.41 18707.55 00:11:10.771 00:11:10.771 Latency(us) 00:11:10.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.771 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:10.771 Nvme1n1 : 1.01 8773.52 34.27 0.00 0.00 14531.34 7000.44 24427.05 00:11:10.771 =================================================================================================================== 00:11:10.771 Total : 8773.52 34.27 0.00 0.00 14531.34 7000.44 24427.05 00:11:10.771 00:11:10.771 Latency(us) 00:11:10.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.771 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:10.771 Nvme1n1 : 1.01 8399.16 32.81 0.00 0.00 15174.71 7685.59 28120.90 00:11:10.771 =================================================================================================================== 00:11:10.771 Total : 8399.16 32.81 0.00 0.00 15174.71 7685.59 28120.90 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74955 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74957 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74962 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.039 rmmod nvme_tcp 00:11:11.039 rmmod nvme_fabrics 00:11:11.039 rmmod nvme_keyring 00:11:11.039 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74899 ']' 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74899 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74899 ']' 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74899 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74899 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:11.297 killing process with pid 74899 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74899' 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74899 00:11:11.297 19:39:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74899 00:11:11.297 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:11.297 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:11.297 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:11.297 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:11.297 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:11.297 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.297 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.297 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.556 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:11.556 00:11:11.556 real 0m4.144s 00:11:11.556 user 0m18.045s 00:11:11.556 sys 0m2.255s 00:11:11.556 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.556 ************************************ 00:11:11.556 END TEST nvmf_bdev_io_wait 00:11:11.556 ************************************ 00:11:11.556 19:39:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:11.556 19:39:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:11.556 19:39:37 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:11.556 19:39:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:11.556 19:39:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.556 19:39:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:11.556 ************************************ 00:11:11.556 START TEST nvmf_queue_depth 00:11:11.556 ************************************ 00:11:11.556 19:39:37 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:11.556 * Looking for test storage... 00:11:11.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:11.556 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:11.557 Cannot find device "nvmf_tgt_br" 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.557 Cannot find device "nvmf_tgt_br2" 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:11.557 Cannot find device "nvmf_tgt_br" 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:11.557 Cannot find device "nvmf_tgt_br2" 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:11:11.557 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:11.814 19:39:37 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:11:11.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:11.814 00:11:11.814 --- 10.0.0.2 ping statistics --- 00:11:11.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.814 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:11.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:11:11.814 00:11:11.814 --- 10.0.0.3 ping statistics --- 00:11:11.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.814 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:11:11.814 00:11:11.814 --- 10.0.0.1 ping statistics --- 00:11:11.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.814 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:11.814 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=75192 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 75192 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75192 ']' 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
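As in the earlier bdev_io_wait run, nvmfappstart launches nvmf_tgt in the background inside the target namespace (here with core mask 0x2) and then waits, via waitforlisten, until the application is listening on /var/tmp/spdk.sock before any RPCs are issued. The waitforlisten internals are not part of this trace; a rough, purely illustrative equivalent that polls the RPC socket with SPDK's scripts/rpc.py might look like:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Poll until the RPC socket answers (approximation of waitforlisten, not its source).
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
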
00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.071 19:39:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:12.071 [2024-07-15 19:39:37.650380] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:12.071 [2024-07-15 19:39:37.650523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.071 [2024-07-15 19:39:37.788935] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.328 [2024-07-15 19:39:37.919286] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.328 [2024-07-15 19:39:37.919382] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.328 [2024-07-15 19:39:37.919418] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.328 [2024-07-15 19:39:37.919436] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.328 [2024-07-15 19:39:37.919451] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.328 [2024-07-15 19:39:37.919498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.892 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.892 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:12.892 19:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.892 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.892 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.149 [2024-07-15 19:39:38.704857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.149 Malloc0 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.149 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
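With the target up, the test configures it entirely over RPC: the nvmf_create_transport call above is followed, in the trace just below, by the malloc bdev, subsystem, namespace and listener calls, after which bdevperf attaches from the initiator side. Assuming rpc_cmd simply forwards its arguments to SPDK's scripts/rpc.py on the default socket (the wrapper itself is not shown in this trace), the same target-side setup could be reproduced by hand as:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
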
00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.150 [2024-07-15 19:39:38.763528] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75242 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75242 /var/tmp/bdevperf.sock 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75242 ']' 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.150 19:39:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:13.150 [2024-07-15 19:39:38.826081] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:11:13.150 [2024-07-15 19:39:38.826199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75242 ] 00:11:13.408 [2024-07-15 19:39:38.968923] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.408 [2024-07-15 19:39:39.103595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.341 19:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.341 19:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:14.341 19:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:14.341 19:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.341 19:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:14.341 NVMe0n1 00:11:14.341 19:39:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.341 19:39:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:14.341 Running I/O for 10 seconds... 00:11:24.344 00:11:24.344 Latency(us) 00:11:24.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.344 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:24.344 Verification LBA range: start 0x0 length 0x4000 00:11:24.344 NVMe0n1 : 10.08 9341.39 36.49 0.00 0.00 109160.46 27644.28 74830.20 00:11:24.344 =================================================================================================================== 00:11:24.344 Total : 9341.39 36.49 0.00 0.00 109160.46 27644.28 74830.20 00:11:24.344 0 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75242 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75242 ']' 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75242 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75242 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:24.602 killing process with pid 75242 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75242' 00:11:24.602 Received shutdown signal, test time was about 10.000000 seconds 00:11:24.602 00:11:24.602 Latency(us) 00:11:24.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.602 =================================================================================================================== 00:11:24.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:24.602 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75242 00:11:24.602 19:39:50 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75242 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.860 rmmod nvme_tcp 00:11:24.860 rmmod nvme_fabrics 00:11:24.860 rmmod nvme_keyring 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 75192 ']' 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 75192 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75192 ']' 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75192 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75192 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:24.860 killing process with pid 75192 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75192' 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75192 00:11:24.860 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75192 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:25.118 00:11:25.118 real 0m13.626s 00:11:25.118 user 0m23.632s 00:11:25.118 sys 0m2.080s 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.118 ************************************ 00:11:25.118 END TEST nvmf_queue_depth 00:11:25.118 ************************************ 00:11:25.118 19:39:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:25.118 19:39:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:25.118 19:39:50 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:25.118 19:39:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:25.118 19:39:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.118 19:39:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:25.118 ************************************ 00:11:25.118 START TEST nvmf_target_multipath 00:11:25.118 ************************************ 00:11:25.118 19:39:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:25.377 * Looking for test storage... 00:11:25.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.377 19:39:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:25.378 19:39:50 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:25.378 Cannot find device "nvmf_tgt_br" 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:25.378 Cannot find device "nvmf_tgt_br2" 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:25.378 19:39:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:25.378 Cannot find device "nvmf_tgt_br" 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:25.378 Cannot find device "nvmf_tgt_br2" 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:25.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:25.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:25.378 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:25.636 
19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:25.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:11:25.636 00:11:25.636 --- 10.0.0.2 ping statistics --- 00:11:25.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.636 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:25.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:25.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:11:25.636 00:11:25.636 --- 10.0.0.3 ping statistics --- 00:11:25.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.636 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:25.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:25.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:25.636 00:11:25.636 --- 10.0.0.1 ping statistics --- 00:11:25.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.636 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75579 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75579 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75579 ']' 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.636 19:39:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:25.636 [2024-07-15 19:39:51.390927] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
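The nvmf_veth_init commands traced above build the virtual network that these TCP tests run over: an initiator-side veth pair left on the host, two target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge, with 10.0.0.1 as the initiator address and 10.0.0.2/10.0.0.3 as the two target listener addresses. A minimal standalone sketch of the same topology, condensed from the commands in the trace (assumes root plus iproute2/iptables; the interface names and 10.0.0.x addresses are simply the ones this harness uses, not requirements), would be:

    #!/usr/bin/env bash
    # Recreate the test topology: host (initiator) <-> bridge <-> target network namespace.
    set -e
    ip netns add nvmf_tgt_ns_spdk
    # One veth pair for the initiator stays on the host; two target-side pairs move into the namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = the two target listeners.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
    # A single bridge ties together the host-side ends of all three pairs.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    # Allow NVMe/TCP traffic (port 4420) in and forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Sanity-check both directions, exactly as the harness does.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The EAL and reactor messages that follow in the log are nvmf_tgt starting inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt), so its listeners on 10.0.0.2 and 10.0.0.3 are reachable from the host only through the bridge set up above.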
00:11:25.636 [2024-07-15 19:39:51.391043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.894 [2024-07-15 19:39:51.534779] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.894 [2024-07-15 19:39:51.658829] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.894 [2024-07-15 19:39:51.658906] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.894 [2024-07-15 19:39:51.658920] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.894 [2024-07-15 19:39:51.658930] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.894 [2024-07-15 19:39:51.658939] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.894 [2024-07-15 19:39:51.659135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.894 [2024-07-15 19:39:51.659916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.894 [2024-07-15 19:39:51.660001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.894 [2024-07-15 19:39:51.660018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.828 19:39:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.828 19:39:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:11:26.828 19:39:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.828 19:39:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:26.828 19:39:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:26.828 19:39:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.828 19:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:27.087 [2024-07-15 19:39:52.660099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.087 19:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:27.346 Malloc0 00:11:27.346 19:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:27.607 19:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.868 19:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.868 [2024-07-15 19:39:53.622261] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.868 19:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:11:28.126 [2024-07-15 19:39:53.850446] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:28.126 19:39:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:28.384 19:39:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:28.641 19:39:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.641 19:39:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:11:28.641 19:39:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.641 19:39:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:28.641 19:39:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75722 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:30.545 19:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:30.804 [global] 00:11:30.804 thread=1 00:11:30.804 invalidate=1 00:11:30.804 rw=randrw 00:11:30.804 time_based=1 00:11:30.804 runtime=6 00:11:30.804 ioengine=libaio 00:11:30.804 direct=1 00:11:30.804 bs=4096 00:11:30.804 iodepth=128 00:11:30.804 norandommap=0 00:11:30.804 numjobs=1 00:11:30.804 00:11:30.804 verify_dump=1 00:11:30.804 verify_backlog=512 00:11:30.804 verify_state_save=0 00:11:30.804 do_verify=1 00:11:30.804 verify=crc32c-intel 00:11:30.804 [job0] 00:11:30.804 filename=/dev/nvme0n1 00:11:30.804 Could not set queue depth (nvme0n1) 00:11:30.804 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.804 fio-3.35 00:11:30.804 Starting 1 thread 00:11:31.739 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:31.996 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:32.254 19:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:33.187 19:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:33.187 19:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:33.187 19:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:33.187 19:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:33.445 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:33.703 19:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:34.637 19:40:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:34.637 19:40:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:34.637 19:40:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:34.637 19:40:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75722 00:11:37.166 00:11:37.166 job0: (groupid=0, jobs=1): err= 0: pid=75743: Mon Jul 15 19:40:02 2024 00:11:37.166 read: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(246MiB/6006msec) 00:11:37.166 slat (usec): min=4, max=5902, avg=54.21, stdev=243.91 00:11:37.166 clat (usec): min=1039, max=17931, avg=8243.38, stdev=1306.62 00:11:37.166 lat (usec): min=1056, max=17944, avg=8297.58, stdev=1316.97 00:11:37.166 clat percentiles (usec): 00:11:37.166 | 1.00th=[ 4948], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 7439], 00:11:37.166 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8356], 00:11:37.166 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10552], 00:11:37.166 | 99.00th=[12518], 99.50th=[13042], 99.90th=[15401], 99.95th=[16319], 00:11:37.166 | 99.99th=[16909] 00:11:37.166 bw ( KiB/s): min= 8880, max=28328, per=53.00%, avg=22259.64, stdev=5897.76, samples=11 00:11:37.166 iops : min= 2220, max= 7082, avg=5564.91, stdev=1474.44, samples=11 00:11:37.166 write: IOPS=6348, BW=24.8MiB/s (26.0MB/s)(133MiB/5362msec); 0 zone resets 00:11:37.166 slat (usec): min=12, max=2289, avg=66.34, stdev=171.89 00:11:37.166 clat (usec): min=525, max=17779, avg=7105.48, stdev=1056.26 00:11:37.166 lat (usec): min=566, max=17818, avg=7171.82, stdev=1061.35 00:11:37.166 clat percentiles (usec): 00:11:37.166 | 1.00th=[ 4047], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 6521], 00:11:37.166 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7308], 00:11:37.166 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8455], 00:11:37.166 | 99.00th=[10552], 99.50th=[11207], 99.90th=[13173], 99.95th=[13698], 00:11:37.166 | 99.99th=[14615] 00:11:37.166 bw ( KiB/s): min= 9328, max=27632, per=87.79%, avg=22293.09, stdev=5574.79, samples=11 00:11:37.166 iops : min= 2332, max= 6908, avg=5573.27, stdev=1393.70, samples=11 00:11:37.166 lat (usec) : 750=0.01%, 1000=0.01% 00:11:37.166 lat (msec) : 2=0.03%, 4=0.44%, 10=94.21%, 20=5.32% 00:11:37.166 cpu : usr=4.85%, sys=22.41%, ctx=6244, majf=0, minf=108 00:11:37.166 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:37.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:37.166 issued rwts: total=63059,34039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:37.166 00:11:37.166 Run status group 0 (all jobs): 00:11:37.166 READ: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=246MiB (258MB), run=6006-6006msec 00:11:37.166 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=133MiB (139MB), run=5362-5362msec 00:11:37.166 00:11:37.166 Disk stats (read/write): 00:11:37.166 nvme0n1: ios=62315/33198, 
merge=0/0, ticks=482574/221546, in_queue=704120, util=98.63% 00:11:37.166 19:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:37.166 19:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:37.424 19:40:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:38.798 19:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:38.798 19:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:38.798 19:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:38.798 19:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:38.798 19:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75869 00:11:38.798 19:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:38.798 19:40:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:38.798 [global] 00:11:38.798 thread=1 00:11:38.798 invalidate=1 00:11:38.798 rw=randrw 00:11:38.798 time_based=1 00:11:38.798 runtime=6 00:11:38.798 ioengine=libaio 00:11:38.798 direct=1 00:11:38.798 bs=4096 00:11:38.798 iodepth=128 00:11:38.798 norandommap=0 00:11:38.798 numjobs=1 00:11:38.798 00:11:38.798 verify_dump=1 00:11:38.798 verify_backlog=512 00:11:38.798 verify_state_save=0 00:11:38.798 do_verify=1 00:11:38.798 verify=crc32c-intel 00:11:38.798 [job0] 00:11:38.798 filename=/dev/nvme0n1 00:11:38.798 Could not set queue depth (nvme0n1) 00:11:38.798 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.798 fio-3.35 00:11:38.798 Starting 1 thread 00:11:39.732 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:39.732 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:39.990 19:40:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:41.361 19:40:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:41.361 19:40:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:41.361 19:40:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:41.361 19:40:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:41.361 19:40:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:41.619 19:40:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:42.585 19:40:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:42.585 19:40:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:42.585 19:40:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:42.585 19:40:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75869 00:11:45.110 00:11:45.110 job0: (groupid=0, jobs=1): err= 0: pid=75890: Mon Jul 15 19:40:10 2024 00:11:45.110 read: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(275MiB/6004msec) 00:11:45.110 slat (usec): min=3, max=6002, avg=42.23, stdev=203.21 00:11:45.110 clat (usec): min=279, max=18049, avg=7496.47, stdev=1942.84 00:11:45.110 lat (usec): min=295, max=18059, avg=7538.70, stdev=1959.80 00:11:45.110 clat percentiles (usec): 00:11:45.110 | 1.00th=[ 2343], 5.00th=[ 3818], 10.00th=[ 4686], 20.00th=[ 6194], 00:11:45.110 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7898], 00:11:45.110 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10552], 00:11:45.110 | 99.00th=[12125], 99.50th=[12780], 99.90th=[15401], 99.95th=[16057], 00:11:45.110 | 99.99th=[17171] 00:11:45.110 bw ( KiB/s): min= 9896, max=43240, per=53.33%, avg=24994.00, stdev=9750.82, samples=11 00:11:45.110 iops : min= 2474, max=10810, avg=6248.45, stdev=2437.67, samples=11 00:11:45.110 write: IOPS=7004, BW=27.4MiB/s (28.7MB/s)(147MiB/5371msec); 0 zone resets 00:11:45.110 slat (usec): min=12, max=2077, avg=53.74, stdev=133.65 00:11:45.110 clat (usec): min=470, max=16668, avg=6236.44, stdev=1856.79 00:11:45.110 lat (usec): min=614, max=16694, avg=6290.18, stdev=1874.13 00:11:45.110 clat percentiles (usec): 00:11:45.110 | 1.00th=[ 2073], 5.00th=[ 2835], 10.00th=[ 3359], 20.00th=[ 4490], 00:11:45.110 | 30.00th=[ 5473], 40.00th=[ 6259], 50.00th=[ 6652], 60.00th=[ 6980], 00:11:45.110 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8225], 95.00th=[ 8848], 00:11:45.110 | 99.00th=[10159], 99.50th=[11076], 99.90th=[13698], 99.95th=[14615], 00:11:45.110 | 99.99th=[16581] 00:11:45.110 bw ( KiB/s): min=10136, max=42592, per=89.17%, avg=24986.64, stdev=9529.65, samples=11 00:11:45.110 iops : min= 2534, max=10648, avg=6246.55, stdev=2382.30, samples=11 00:11:45.110 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.05% 00:11:45.110 lat (msec) : 2=0.57%, 4=8.70%, 10=85.24%, 20=5.42% 00:11:45.110 cpu : usr=5.83%, sys=25.60%, ctx=7157, majf=0, minf=108 00:11:45.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:45.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.110 issued rwts: total=70343,37623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.110 00:11:45.110 Run status group 0 (all jobs): 00:11:45.110 READ: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=275MiB (288MB), run=6004-6004msec 00:11:45.111 WRITE: bw=27.4MiB/s (28.7MB/s), 27.4MiB/s-27.4MiB/s (28.7MB/s-28.7MB/s), io=147MiB (154MB), run=5371-5371msec 00:11:45.111 00:11:45.111 Disk stats (read/write): 00:11:45.111 nvme0n1: ios=69289/37081, merge=0/0, ticks=485213/214767, in_queue=699980, util=98.68% 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.111 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.369 rmmod nvme_tcp 00:11:45.369 rmmod nvme_fabrics 00:11:45.369 rmmod nvme_keyring 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75579 ']' 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75579 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75579 ']' 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75579 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75579 00:11:45.369 killing process with pid 75579 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75579' 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75579 00:11:45.369 19:40:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 75579 00:11:45.627 
19:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:45.627 19:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:45.627 19:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:45.627 19:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.627 19:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:45.627 19:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.627 19:40:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.627 19:40:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.627 19:40:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:45.627 ************************************ 00:11:45.627 END TEST nvmf_target_multipath 00:11:45.627 ************************************ 00:11:45.627 00:11:45.654 real 0m20.401s 00:11:45.654 user 1m19.817s 00:11:45.654 sys 0m6.461s 00:11:45.654 19:40:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.654 19:40:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:45.654 19:40:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:45.654 19:40:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:45.654 19:40:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:45.654 19:40:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.654 19:40:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:45.654 ************************************ 00:11:45.654 START TEST nvmf_zcopy 00:11:45.654 ************************************ 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:45.654 * Looking for test storage... 
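For the nvmf_target_multipath run that just completed, the repeated check_ana_state calls in the trace poll the kernel's view of each controller path until it matches the ANA state that was set on the corresponding target listener with scripts/rpc.py nvmf_subsystem_listener_set_ana_state. A condensed sketch of that polling loop, reconstructed from the traced statements (the 20-second budget and the /sys/block/<path>/ana_state file come straight from the log; the exact script in multipath.sh may differ in detail), is:

    # Wait until sysfs reports the expected ANA state for one path,
    # e.g.:  check_ana_state nvme0c1n1 inaccessible
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Keep polling while the sysfs file is missing or still shows the old state.
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            sleep 1s
            (( timeout-- == 0 )) && return 1   # give up after roughly 20 seconds
        done
    }

Note the two spellings visible in the trace: the RPC takes non_optimized with an underscore, while the sysfs ana_state file reports non-optimized with a hyphen, which is why the checks compare against the hyphenated form.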
00:11:45.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.654 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.655 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.655 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.655 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.655 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:45.912 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:45.913 Cannot find device "nvmf_tgt_br" 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.913 Cannot find device "nvmf_tgt_br2" 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:45.913 Cannot find device "nvmf_tgt_br" 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:45.913 Cannot find device "nvmf_tgt_br2" 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:45.913 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:46.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:11:46.170 00:11:46.170 --- 10.0.0.2 ping statistics --- 00:11:46.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.170 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:46.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:46.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:11:46.170 00:11:46.170 --- 10.0.0.3 ping statistics --- 00:11:46.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.170 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:46.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:46.170 00:11:46.170 --- 10.0.0.1 ping statistics --- 00:11:46.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.170 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:46.170 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=76175 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 76175 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76175 ']' 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.171 19:40:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.171 [2024-07-15 19:40:11.780982] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:46.171 [2024-07-15 19:40:11.781322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.171 [2024-07-15 19:40:11.915312] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.428 [2024-07-15 19:40:12.027733] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.428 [2024-07-15 19:40:12.027776] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:46.428 [2024-07-15 19:40:12.027786] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.428 [2024-07-15 19:40:12.027794] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.428 [2024-07-15 19:40:12.027801] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.428 [2024-07-15 19:40:12.027831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.359 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.360 [2024-07-15 19:40:12.860680] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.360 [2024-07-15 19:40:12.876753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.360 malloc0 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.360 
19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:47.360 { 00:11:47.360 "params": { 00:11:47.360 "name": "Nvme$subsystem", 00:11:47.360 "trtype": "$TEST_TRANSPORT", 00:11:47.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:47.360 "adrfam": "ipv4", 00:11:47.360 "trsvcid": "$NVMF_PORT", 00:11:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:47.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:47.360 "hdgst": ${hdgst:-false}, 00:11:47.360 "ddgst": ${ddgst:-false} 00:11:47.360 }, 00:11:47.360 "method": "bdev_nvme_attach_controller" 00:11:47.360 } 00:11:47.360 EOF 00:11:47.360 )") 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:47.360 19:40:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:47.360 "params": { 00:11:47.360 "name": "Nvme1", 00:11:47.360 "trtype": "tcp", 00:11:47.360 "traddr": "10.0.0.2", 00:11:47.360 "adrfam": "ipv4", 00:11:47.360 "trsvcid": "4420", 00:11:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:47.360 "hdgst": false, 00:11:47.360 "ddgst": false 00:11:47.360 }, 00:11:47.360 "method": "bdev_nvme_attach_controller" 00:11:47.360 }' 00:11:47.360 [2024-07-15 19:40:12.968912] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:11:47.360 [2024-07-15 19:40:12.969029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76226 ] 00:11:47.360 [2024-07-15 19:40:13.108924] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.617 [2024-07-15 19:40:13.220829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.874 Running I/O for 10 seconds... 
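The trace above is target/zcopy.sh bringing the target up over JSON-RPC (a zero-copy TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and malloc0 attached as namespace 1) and then driving it with bdevperf for a 10-second verify workload using a generated bdev_nvme_attach_controller config. The sketch below is a minimal hand-written reproduction of that flow, not the test script itself: it assumes nvmf_tgt is already running inside the nvmf_tgt_ns_spdk namespace as set up earlier, it calls scripts/rpc.py directly (rpc_cmd in the trace issues the same RPCs through the autotest wrapper), and the /tmp/nvme1.json file name plus the outer "subsystems"/"bdev" wrapper are illustrative; the RPC arguments and bdevperf flags are copied from the trace.

#!/usr/bin/env bash
# Minimal reproduction of the target setup and verify run traced above
# (addresses, NQNs and flags taken from the log; file name illustrative).
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"   # rpc_cmd in the trace drives the same RPC methods

# Zero-copy TCP transport with in-capsule data disabled (-c 0) and the C2H
# success optimization turned off (-o), as requested by zcopy.sh
"$RPC" nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem with a 32 MB malloc bdev (4 KiB blocks) exported as namespace 1
# on 10.0.0.2:4420
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_malloc_create 32 4096 -b malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# bdevperf config: the bdev_nvme_attach_controller entry matches the one printed
# by gen_nvmf_target_json in the trace; the surrounding wrapper is the usual
# SPDK application JSON layout (assumed here, not shown verbatim in the log).
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 10-second verify workload, queue depth 128, 8 KiB I/O - the same knobs as the
# run whose results are reported next in the log
"$SPDK/build/examples/bdevperf" --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192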
00:11:57.872 00:11:57.872 Latency(us) 00:11:57.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:57.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:57.872 Verification LBA range: start 0x0 length 0x1000 00:11:57.872 Nvme1n1 : 10.02 6235.01 48.71 0.00 0.00 20462.99 3038.49 31457.28 00:11:57.872 =================================================================================================================== 00:11:57.872 Total : 6235.01 48.71 0.00 0.00 20462.99 3038.49 31457.28 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76340 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:58.130 { 00:11:58.130 "params": { 00:11:58.130 "name": "Nvme$subsystem", 00:11:58.130 "trtype": "$TEST_TRANSPORT", 00:11:58.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:58.130 "adrfam": "ipv4", 00:11:58.130 "trsvcid": "$NVMF_PORT", 00:11:58.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:58.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:58.130 "hdgst": ${hdgst:-false}, 00:11:58.130 "ddgst": ${ddgst:-false} 00:11:58.130 }, 00:11:58.130 "method": "bdev_nvme_attach_controller" 00:11:58.130 } 00:11:58.130 EOF 00:11:58.130 )") 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:58.130 [2024-07-15 19:40:23.669231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.669281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
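After the verify run completes (the Latency table above), zcopy.sh starts a second bdevperf job, 5 seconds of 50/50 random read/write at queue depth 128 with 8 KiB I/O, and while that I/O is in flight nvmf_subsystem_add_ns keeps being re-issued for NSID 1, which is already attached to cnode1. The target pauses the subsystem, finds the NSID in use, and rejects each call with JSON-RPC error -32602, which is what the long run of identical "Invalid parameters" errors that follows records. Below is a rough sketch of that pattern, assuming the setup and /tmp/nvme1.json from the previous snippet; the loop condition and error handling are illustrative and not lifted from zcopy.sh.

#!/usr/bin/env bash
# Random read/write I/O in the background while nvmf_subsystem_add_ns is
# repeatedly retried with an NSID that is already in use.
set -uo pipefail   # no -e: the RPC calls below are expected to fail

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# 5-second randrw run, 50% reads, same flags as the bdevperf invocation above
"$SPDK/build/examples/bdevperf" --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# While bdevperf is running, every attempt to re-add NSID 1 should be rejected
# with -32602 Invalid parameters ("Requested NSID 1 already in use")
while kill -0 "$perfpid" 2> /dev/null; do
    if "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 2> /dev/null; then
        echo "unexpected success: NSID 1 was attached twice" >&2
        exit 1
    fi
done

wait "$perfpid"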
00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:58.130 19:40:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:58.130 "params": { 00:11:58.130 "name": "Nvme1", 00:11:58.130 "trtype": "tcp", 00:11:58.130 "traddr": "10.0.0.2", 00:11:58.130 "adrfam": "ipv4", 00:11:58.130 "trsvcid": "4420", 00:11:58.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:58.130 "hdgst": false, 00:11:58.130 "ddgst": false 00:11:58.130 }, 00:11:58.130 "method": "bdev_nvme_attach_controller" 00:11:58.130 }' 00:11:58.130 [2024-07-15 19:40:23.681216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.681249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.689179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.689209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.701190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.701222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.713222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.713253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.722959] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:11:58.130 [2024-07-15 19:40:23.723072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76340 ] 00:11:58.130 [2024-07-15 19:40:23.725195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.725221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.737209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.737243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.745238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.745268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.757240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.757269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.769220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.769249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.130 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.130 [2024-07-15 19:40:23.781243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.130 [2024-07-15 19:40:23.781272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:58.131 [2024-07-15 19:40:23.793253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.793282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.805244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.805276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.817243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.817271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.829260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.829288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.841266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.841303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.853244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.853273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.858233] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.131 [2024-07-15 19:40:23.865262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.865293] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.877261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.877295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.889254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.889283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.131 [2024-07-15 19:40:23.901257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.131 [2024-07-15 19:40:23.901287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.131 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.913270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.913299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.921267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.921294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.929272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.929302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.937261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.937286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.949294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.949335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.957267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.957299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.965274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.965302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.976459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.389 [2024-07-15 19:40:23.977278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.977306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:23.989289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:23.989320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:24.001317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:11:58.389 [2024-07-15 19:40:24.001355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.389 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.389 [2024-07-15 19:40:24.013320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.389 [2024-07-15 19:40:24.013358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.025325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.025363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.037323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.037360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.049337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.049383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.061337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.061373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.073316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.073345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.085328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.085362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.097359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.097404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.109356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.109388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.121348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.121382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.133346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.133380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.145360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.145390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.390 [2024-07-15 19:40:24.157372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.390 [2024-07-15 19:40:24.157408] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.390 Running I/O for 5 seconds... 00:11:58.390 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.176079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.176124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.190868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.190905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.200807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.200843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.215869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.215905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.226566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.226601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.240744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.240780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.250951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.250987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.265462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.265518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.280495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.280548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.290797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.290833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.302093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.302129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.317020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.317062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.333823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.333893] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.349264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.349301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.359631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.359669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.370864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.370902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.381966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.382002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.396682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.396720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.407163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.407217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.648 [2024-07-15 19:40:24.422313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.648 [2024-07-15 19:40:24.422351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.648 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.432593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.432628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.447744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.447781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.457956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.458000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.472887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.472923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.489400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.489436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.505805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.505845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.522559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.522612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.538449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.538488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.555584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.555622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.570783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.570821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.581300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.581336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.595860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.595896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:58.907 [2024-07-15 19:40:24.611945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.611986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.621946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.621984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.637936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.637978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.652963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.653009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.668615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.668663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:58.907 [2024-07-15 19:40:24.678937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.907 [2024-07-15 19:40:24.678980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.907 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.166 [2024-07-15 19:40:24.693639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.166 [2024-07-15 19:40:24.693681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.711218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.711260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.726572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.726612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.743210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.743249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.758348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.758389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.773644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.773690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.783973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.784010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.798766] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.798805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.814341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.814378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.825184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.825220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.836049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.836087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.853617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.853657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.864490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.864527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.879519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.879605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.890417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.890456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.906266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.906308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.916636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.916672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.931181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.931230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.167 [2024-07-15 19:40:24.941698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.167 [2024-07-15 19:40:24.941736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.167 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:24.956833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:24.956874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:24.972573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:24.972614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:24.982234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:24.982274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:24.998338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:24.998375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.013073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.013107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.029354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.029387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.044900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.044948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.061346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.061395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.071693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.071740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.086135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.086195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.096468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.096515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.111316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.111370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.126253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.126309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.142335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.142371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.160040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:59.438 [2024-07-15 19:40:25.160079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.175871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.175940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.192010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.192073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.438 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.438 [2024-07-15 19:40:25.210870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.438 [2024-07-15 19:40:25.210929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.226050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.226117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.242943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.242992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.258461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.258511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.268972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.269021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.283674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.283724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.294259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.294293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.308801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.308854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.319608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.319662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.334505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.334559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.344525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.344572] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.359246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.359296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.369672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.369707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.384355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.384390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.394335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.394371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.730 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.730 [2024-07-15 19:40:25.409009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.730 [2024-07-15 19:40:25.409049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.731 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.731 [2024-07-15 19:40:25.424282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.731 [2024-07-15 19:40:25.424325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.731 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.731 [2024-07-15 19:40:25.434581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.731 [2024-07-15 19:40:25.434614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.731 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.731 [2024-07-15 19:40:25.448670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.731 [2024-07-15 19:40:25.448719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.731 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.731 [2024-07-15 19:40:25.464569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.731 [2024-07-15 19:40:25.464623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.731 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.731 [2024-07-15 19:40:25.475090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.731 [2024-07-15 19:40:25.475133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.731 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.731 [2024-07-15 19:40:25.489958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.731 [2024-07-15 19:40:25.490010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.731 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.731 [2024-07-15 19:40:25.507757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.731 [2024-07-15 19:40:25.507810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.731 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.522581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.522622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.532247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.532281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.548443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.548479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.566456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.566491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.581613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.581647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.591753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.591787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.602603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.602637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:59.991 [2024-07-15 19:40:25.620333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.620383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.635936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.635986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.652370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.652403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.670171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.670234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.684698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.684753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.991 [2024-07-15 19:40:25.694548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.991 [2024-07-15 19:40:25.694600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.991 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.992 [2024-07-15 19:40:25.708863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.992 [2024-07-15 19:40:25.708922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.992 2024/07/15 19:40:25 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.992 [2024-07-15 19:40:25.721485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.992 [2024-07-15 19:40:25.721551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.992 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.992 [2024-07-15 19:40:25.731921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.992 [2024-07-15 19:40:25.731955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.992 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.992 [2024-07-15 19:40:25.742463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.992 [2024-07-15 19:40:25.742497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.992 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.992 [2024-07-15 19:40:25.757527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.992 [2024-07-15 19:40:25.757560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.992 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:59.992 [2024-07-15 19:40:25.768075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.992 [2024-07-15 19:40:25.768124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.992 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.250 [2024-07-15 19:40:25.783035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.251 [2024-07-15 19:40:25.783068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.251 [2024-07-15 19:40:25.793448] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.251 [2024-07-15 19:40:25.793519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.251 [2024-07-15 19:40:25.808290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.251 [2024-07-15 19:40:25.808324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.251 [2024-07-15 19:40:25.825895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.251 [2024-07-15 19:40:25.825946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.251 [2024-07-15 19:40:25.841000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.251 [2024-07-15 19:40:25.841054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.251 [2024-07-15 19:40:25.856850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.251 [2024-07-15 19:40:25.856898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.251 [2024-07-15 19:40:25.873943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.251 [2024-07-15 19:40:25.873979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:00.251 [2024-07-15 19:40:25.894983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.251 [2024-07-15 19:40:25.895037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:00.251 [2024-07-15 19:40:25.905897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:00.251 [2024-07-15 19:40:25.905947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:12:00.251 [2024-07-15 19:40:25.917288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:00.251 [2024-07-15 19:40:25.917329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:00.251 2024/07/15 19:40:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line error sequence (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use; nvmf_rpc.c:1553:nvmf_rpc_ns_paused: Unable to add namespace; JSON-RPC nvmf_subsystem_add_ns rejected with Code=-32602 Msg=Invalid parameters) repeats with successive timestamps from 00:12:00.251 (2024-07-15 19:40:25.928518) through 00:12:02.322 (2024-07-15 19:40:27.871347) ...]
00:12:02.322 [2024-07-15 19:40:27.888703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:02.322 [2024-07-15 19:40:27.888760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:02.322 2024/07/15 19:40:27 error on
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:27.903682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:27.903738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:27.913809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:27.913870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:27.929482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:27.929527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:27.944438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:27.944476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:27.954571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:27.954622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:27.968732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:27.968783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:27.979132] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:27.979198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:27.993270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:27.993319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:28.009162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:28.009246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:28.025251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:28.025302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:28.042614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:28.042667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:28.058993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:28.059065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:28.075000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:28.075056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:28.084396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:28.084445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.322 [2024-07-15 19:40:28.098257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.322 [2024-07-15 19:40:28.098309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.322 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.114388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.114425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.130851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.130903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.148645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.148700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.163791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.163843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.173618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.173657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.189260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.189298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.206930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.206970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.222540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.222579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.239632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.239672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.255919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.255965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.274397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.274441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.289182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.289222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.298813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.298850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.579 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.579 [2024-07-15 19:40:28.309487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.579 [2024-07-15 19:40:28.309536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.580 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.580 [2024-07-15 19:40:28.323346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.580 [2024-07-15 19:40:28.323373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.580 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.580 [2024-07-15 19:40:28.339579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.580 [2024-07-15 19:40:28.339614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.580 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.580 [2024-07-15 19:40:28.355873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.580 [2024-07-15 19:40:28.355908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.580 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.372858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:02.836 [2024-07-15 19:40:28.372891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.383291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.383325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.398000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.398049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.414806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.414841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.431244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.431276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.448886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.448935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.464195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.464245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.474596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.474645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.488864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.488914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.504785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.504835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.521615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.521649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.536687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.536737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.551415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.551449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.566853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.566902] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.582574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.582624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.599170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.599214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:02.836 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:02.836 [2024-07-15 19:40:28.616046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:02.836 [2024-07-15 19:40:28.616095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.626734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.626768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.641479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.641538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.658273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.658307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.674538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.674588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.690898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.690948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.707382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.707431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.723834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.723884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.741016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.741069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.756233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.756272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.772989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.773022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:03.093 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.093 [2024-07-15 19:40:28.788006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.093 [2024-07-15 19:40:28.788041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.094 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.094 [2024-07-15 19:40:28.798108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.094 [2024-07-15 19:40:28.798144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.094 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.094 [2024-07-15 19:40:28.812630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.094 [2024-07-15 19:40:28.812665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.094 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.094 [2024-07-15 19:40:28.822890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.094 [2024-07-15 19:40:28.822922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.094 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.094 [2024-07-15 19:40:28.837381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.094 [2024-07-15 19:40:28.837414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.094 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.094 [2024-07-15 19:40:28.853430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.094 [2024-07-15 19:40:28.853461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.094 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:12:03.094 [2024-07-15 19:40:28.870652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.094 [2024-07-15 19:40:28.870685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.094 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:28.886872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:28.886907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:28.903671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:28.903722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:28.919505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:28.919555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:28.936034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:28.936069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:28.954145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:28.954187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:28.964980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:28.965029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:28 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:28.977527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:28.977558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:28.995356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:28.995390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.010530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.010565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.021277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.021310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.035730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.035764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.046163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.046218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.061160] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.061221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.080126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.080173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.094618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.094653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.104955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.104989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.351 [2024-07-15 19:40:29.119986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.351 [2024-07-15 19:40:29.120020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.351 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.609 [2024-07-15 19:40:29.136450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.609 [2024-07-15 19:40:29.136484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.609 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.609 [2024-07-15 19:40:29.152798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.609 [2024-07-15 19:40:29.152835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.609 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
00:12:03.609                                                                          Latency(us)
00:12:03.609 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:12:03.609 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:03.609 Nvme1n1                     :       5.01   11651.68      91.03      0.00      0.00   10970.94    4706.68   23950.43
00:12:03.609 ===================================================================================================================
00:12:03.609 Total                       :            11651.68      91.03      0.00      0.00   10970.94    4706.68   23950.43
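Two quick consistency checks on the I/O summary above (the Average/min/max columns are latencies in microseconds, per the Latency(us) header):

  11651.68 IOPS × 8192 B per I/O ≈ 95.45 MB/s ≈ 91.03 MiB/s, matching the MiB/s column
  11651.68 IOPS × 10970.94 us average latency ≈ 127.8 I/Os in flight, consistent with the queue depth of 128 (Little's law)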
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.610 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.867 [2024-07-15 19:40:29.393355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.867 [2024-07-15 19:40:29.393405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.867 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.867 [2024-07-15 19:40:29.401345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.867 [2024-07-15 19:40:29.401392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.867 2024/07/15 19:40:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:03.867 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76340) - No such process 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76340 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.867 delay0 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.867 19:40:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:03.867 [2024-07-15 19:40:29.612358] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:10.425 Initializing NVMe Controllers 00:12:10.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:10.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
0 00:12:10.425 Initialization complete. Launching workers. 00:12:10.425 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 83 00:12:10.425 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 370, failed to submit 33 00:12:10.425 success 187, unsuccess 183, failed 0 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.425 rmmod nvme_tcp 00:12:10.425 rmmod nvme_fabrics 00:12:10.425 rmmod nvme_keyring 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 76175 ']' 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 76175 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76175 ']' 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76175 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76175 00:12:10.425 killing process with pid 76175 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76175' 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76175 00:12:10.425 19:40:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76175 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:10.425 00:12:10.425 real 0m24.766s 00:12:10.425 user 0m40.242s 00:12:10.425 sys 0m6.540s 00:12:10.425 
19:40:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.425 ************************************ 00:12:10.425 19:40:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:10.425 END TEST nvmf_zcopy 00:12:10.425 ************************************ 00:12:10.425 19:40:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:10.425 19:40:36 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:10.425 19:40:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:10.425 19:40:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.425 19:40:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.425 ************************************ 00:12:10.425 START TEST nvmf_nmic 00:12:10.425 ************************************ 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:10.425 * Looking for test storage... 00:12:10.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.425 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:10.685 Cannot find device "nvmf_tgt_br" 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.685 Cannot find device "nvmf_tgt_br2" 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:10.685 Cannot find device "nvmf_tgt_br" 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:10.685 Cannot find device "nvmf_tgt_br2" 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:10.685 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:10.686 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:10.686 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:10.686 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:10.686 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:10.686 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:10.686 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:10.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:10.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:12:10.945 00:12:10.945 --- 10.0.0.2 ping statistics --- 00:12:10.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.945 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:10.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:10.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:12:10.945 00:12:10.945 --- 10.0.0.3 ping statistics --- 00:12:10.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.945 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:10.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:10.945 00:12:10.945 --- 10.0.0.1 ping statistics --- 00:12:10.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.945 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76668 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76668 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76668 ']' 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.945 19:40:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.945 [2024-07-15 19:40:36.603805] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:12:10.945 [2024-07-15 19:40:36.604418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.205 [2024-07-15 19:40:36.737347] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.205 [2024-07-15 19:40:36.864070] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.205 [2024-07-15 19:40:36.864154] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.205 [2024-07-15 19:40:36.864199] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.205 [2024-07-15 19:40:36.864208] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.205 [2024-07-15 19:40:36.864216] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.205 [2024-07-15 19:40:36.864365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.205 [2024-07-15 19:40:36.865105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.205 [2024-07-15 19:40:36.865213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.205 [2024-07-15 19:40:36.865220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.140 [2024-07-15 19:40:37.612819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.140 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 Malloc0 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
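Editorial note: at this point nmic.sh has brought the target up inside the nvmf_tgt_ns_spdk namespace: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, the Malloc0 namespace, and (next in the log) a 10.0.0.2:4420 listener. The rpc_cmd helper effectively forwards these calls to scripts/rpc.py, so a rough stand-alone sketch of the same bring-up, with the default RPC socket assumed, would be:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192      # same transport options as recorded above
  $rpc_py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512 B blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # NSID auto-assigned (1 here)
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420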
00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 [2024-07-15 19:40:37.687069] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:12.141 test case1: single bdev can't be used in multiple subsystems 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 [2024-07-15 19:40:37.710908] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:12.141 [2024-07-15 19:40:37.710946] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:12.141 [2024-07-15 19:40:37.710958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.141 2024/07/15 19:40:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:12.141 request: 00:12:12.141 { 00:12:12.141 "method": "nvmf_subsystem_add_ns", 00:12:12.141 "params": { 00:12:12.141 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:12.141 "namespace": { 00:12:12.141 "bdev_name": "Malloc0", 00:12:12.141 "no_auto_visible": false 00:12:12.141 } 00:12:12.141 } 00:12:12.141 } 00:12:12.141 Got JSON-RPC error response 00:12:12.141 GoRPCClient: error on JSON-RPC call 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:12.141 Adding namespace failed - expected result. 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:12.141 test case2: host connect to nvmf target in multiple paths 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.141 [2024-07-15 19:40:37.723068] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.141 19:40:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:12.399 19:40:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.399 19:40:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.399 19:40:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.399 19:40:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:12.399 19:40:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.300 19:40:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.300 19:40:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.300 19:40:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.300 19:40:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.300 19:40:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.300 19:40:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:14.300 19:40:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:14.559 [global] 00:12:14.559 thread=1 00:12:14.559 invalidate=1 00:12:14.559 rw=write 00:12:14.559 time_based=1 00:12:14.559 runtime=1 00:12:14.559 ioengine=libaio 00:12:14.559 direct=1 00:12:14.559 bs=4096 00:12:14.559 iodepth=1 00:12:14.559 norandommap=0 00:12:14.559 numjobs=1 00:12:14.559 00:12:14.559 verify_dump=1 00:12:14.559 verify_backlog=512 00:12:14.559 verify_state_save=0 00:12:14.559 do_verify=1 00:12:14.559 verify=crc32c-intel 00:12:14.559 [job0] 00:12:14.559 filename=/dev/nvme0n1 00:12:14.559 Could not set queue depth (nvme0n1) 00:12:14.559 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.559 fio-3.35 00:12:14.559 
Starting 1 thread 00:12:15.934 00:12:15.934 job0: (groupid=0, jobs=1): err= 0: pid=76778: Mon Jul 15 19:40:41 2024 00:12:15.934 read: IOPS=3338, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:12:15.934 slat (nsec): min=12466, max=59059, avg=14830.55, stdev=2115.23 00:12:15.934 clat (usec): min=128, max=3989, avg=146.77, stdev=90.92 00:12:15.934 lat (usec): min=141, max=4011, avg=161.60, stdev=91.39 00:12:15.934 clat percentiles (usec): 00:12:15.934 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 137], 00:12:15.934 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:12:15.934 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 157], 00:12:15.934 | 99.00th=[ 178], 99.50th=[ 277], 99.90th=[ 1205], 99.95th=[ 3195], 00:12:15.934 | 99.99th=[ 3982] 00:12:15.934 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:15.934 slat (usec): min=14, max=104, avg=22.13, stdev= 4.22 00:12:15.934 clat (usec): min=81, max=418, avg=102.98, stdev=10.00 00:12:15.934 lat (usec): min=109, max=442, avg=125.11, stdev=11.56 00:12:15.934 clat percentiles (usec): 00:12:15.934 | 1.00th=[ 94], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 98], 00:12:15.934 | 30.00th=[ 99], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 103], 00:12:15.934 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 116], 00:12:15.934 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 249], 99.95th=[ 285], 00:12:15.934 | 99.99th=[ 420] 00:12:15.934 bw ( KiB/s): min=15688, max=15688, per=100.00%, avg=15688.00, stdev= 0.00, samples=1 00:12:15.934 iops : min= 3922, max= 3922, avg=3922.00, stdev= 0.00, samples=1 00:12:15.934 lat (usec) : 100=19.32%, 250=80.32%, 500=0.29%, 750=0.01% 00:12:15.934 lat (msec) : 2=0.03%, 4=0.03% 00:12:15.934 cpu : usr=2.70%, sys=9.20%, ctx=6927, majf=0, minf=2 00:12:15.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.934 issued rwts: total=3342,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.934 00:12:15.934 Run status group 0 (all jobs): 00:12:15.934 READ: bw=13.0MiB/s (13.7MB/s), 13.0MiB/s-13.0MiB/s (13.7MB/s-13.7MB/s), io=13.1MiB (13.7MB), run=1001-1001msec 00:12:15.934 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:12:15.934 00:12:15.934 Disk stats (read/write): 00:12:15.934 nvme0n1: ios=3122/3105, merge=0/0, ticks=478/354, in_queue=832, util=90.48% 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- 
# return 0 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.934 rmmod nvme_tcp 00:12:15.934 rmmod nvme_fabrics 00:12:15.934 rmmod nvme_keyring 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76668 ']' 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76668 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76668 ']' 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76668 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76668 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:15.934 killing process with pid 76668 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76668' 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76668 00:12:15.934 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76668 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:16.193 00:12:16.193 real 0m5.816s 00:12:16.193 user 0m19.585s 00:12:16.193 sys 0m1.349s 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.193 19:40:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:16.193 ************************************ 00:12:16.193 END TEST nvmf_nmic 00:12:16.193 ************************************ 00:12:16.193 19:40:41 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:16.193 19:40:41 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:16.193 19:40:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:16.193 19:40:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.193 19:40:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:16.452 ************************************ 00:12:16.452 START TEST nvmf_fio_target 00:12:16.452 ************************************ 00:12:16.452 19:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:16.452 * Looking for test storage... 00:12:16.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:16.452 Cannot find device "nvmf_tgt_br" 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:16.452 Cannot find device "nvmf_tgt_br2" 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:12:16.452 Cannot find device "nvmf_tgt_br" 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:16.452 Cannot find device "nvmf_tgt_br2" 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:12:16.452 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:16.453 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:16.453 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:16.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:12:16.711 00:12:16.711 --- 10.0.0.2 ping statistics --- 00:12:16.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.711 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:16.711 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:16.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:12:16.711 00:12:16.712 --- 10.0.0.3 ping statistics --- 00:12:16.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.712 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:16.712 00:12:16.712 --- 10.0.0.1 ping statistics --- 00:12:16.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.712 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76955 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76955 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76955 ']' 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.712 19:40:42 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.712 19:40:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.970 [2024-07-15 19:40:42.525060] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:12:16.970 [2024-07-15 19:40:42.525182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.970 [2024-07-15 19:40:42.664377] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.229 [2024-07-15 19:40:42.783538] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.229 [2024-07-15 19:40:42.783598] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.229 [2024-07-15 19:40:42.783616] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.229 [2024-07-15 19:40:42.783629] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.229 [2024-07-15 19:40:42.783640] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.229 [2024-07-15 19:40:42.783802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.229 [2024-07-15 19:40:42.784019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.229 [2024-07-15 19:40:42.784743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.229 [2024-07-15 19:40:42.784778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.794 19:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.794 19:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:12:17.794 19:40:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.794 19:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.794 19:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.794 19:40:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.794 19:40:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:18.051 [2024-07-15 19:40:43.746932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.051 19:40:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:18.307 19:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:18.307 19:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:18.564 19:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
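For reference, the nvmf_veth_init trace above (nvmf/common.sh@141-207) boils down to a small, fixed topology: one network namespace for the SPDK target, three veth pairs whose host-side peers are joined by a bridge, the 10.0.0.0/24 addresses the fio jobs later connect to, and an iptables rule admitting NVMe/TCP traffic on port 4420. The following is a condensed sketch reconstructed from the commands visible in the log, not an extra script shipped with the test; the device and address names are the ones the harness itself uses.

#!/usr/bin/env bash
# Condensed recap of the veth/bridge topology built by nvmf_veth_init above.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator side stays on the host, target interfaces move into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addresses: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP (port 4420) from the initiator and allow hairpin forwarding on the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks in the log (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify this topology before nvmf_tgt is started inside the namespace via "ip netns exec nvmf_tgt_ns_spdk", after which fio.sh continues with the rpc.py calls traced below to build the malloc/raid bdevs and the NVMe-oF subsystem.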
00:12:18.564 19:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:19.130 19:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:19.130 19:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:19.387 19:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:19.388 19:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:19.645 19:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:19.902 19:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:19.902 19:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:20.160 19:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:20.160 19:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:20.417 19:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:20.417 19:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:20.674 19:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.932 19:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:20.932 19:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.190 19:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:21.190 19:40:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.447 19:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.706 [2024-07-15 19:40:47.377724] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.706 19:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:21.962 19:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:22.220 19:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.478 19:40:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:22.478 19:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:22.478 19:40:48 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.478 19:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:22.478 19:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:22.478 19:40:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:25.001 19:40:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:25.001 19:40:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:25.001 19:40:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.001 19:40:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:25.001 19:40:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.001 19:40:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:25.001 19:40:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:25.001 [global] 00:12:25.001 thread=1 00:12:25.001 invalidate=1 00:12:25.001 rw=write 00:12:25.001 time_based=1 00:12:25.001 runtime=1 00:12:25.001 ioengine=libaio 00:12:25.001 direct=1 00:12:25.001 bs=4096 00:12:25.001 iodepth=1 00:12:25.001 norandommap=0 00:12:25.001 numjobs=1 00:12:25.001 00:12:25.001 verify_dump=1 00:12:25.001 verify_backlog=512 00:12:25.001 verify_state_save=0 00:12:25.001 do_verify=1 00:12:25.001 verify=crc32c-intel 00:12:25.001 [job0] 00:12:25.001 filename=/dev/nvme0n1 00:12:25.001 [job1] 00:12:25.001 filename=/dev/nvme0n2 00:12:25.001 [job2] 00:12:25.001 filename=/dev/nvme0n3 00:12:25.001 [job3] 00:12:25.001 filename=/dev/nvme0n4 00:12:25.001 Could not set queue depth (nvme0n1) 00:12:25.001 Could not set queue depth (nvme0n2) 00:12:25.001 Could not set queue depth (nvme0n3) 00:12:25.001 Could not set queue depth (nvme0n4) 00:12:25.001 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.001 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.001 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.001 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:25.001 fio-3.35 00:12:25.001 Starting 4 threads 00:12:25.935 00:12:25.935 job0: (groupid=0, jobs=1): err= 0: pid=77252: Mon Jul 15 19:40:51 2024 00:12:25.935 read: IOPS=1411, BW=5646KiB/s (5782kB/s)(5652KiB/1001msec) 00:12:25.935 slat (nsec): min=6418, max=56484, avg=12632.17, stdev=7409.37 00:12:25.935 clat (usec): min=268, max=656, avg=380.76, stdev=32.21 00:12:25.935 lat (usec): min=283, max=671, avg=393.39, stdev=32.49 00:12:25.935 clat percentiles (usec): 00:12:25.935 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:12:25.935 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:12:25.935 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 412], 95.00th=[ 441], 00:12:25.935 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 594], 99.95th=[ 660], 00:12:25.935 | 99.99th=[ 660] 00:12:25.935 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:25.935 slat (usec): min=9, max=633, avg=19.72, stdev=16.92 00:12:25.935 clat (usec): min=4, max=415, avg=266.55, stdev=47.06 
00:12:25.935 lat (usec): min=126, max=637, avg=286.27, stdev=45.22 00:12:25.935 clat percentiles (usec): 00:12:25.935 | 1.00th=[ 178], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:12:25.935 | 30.00th=[ 225], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 293], 00:12:25.935 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 326], 00:12:25.935 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 379], 99.95th=[ 416], 00:12:25.935 | 99.99th=[ 416] 00:12:25.935 bw ( KiB/s): min= 8192, max= 8192, per=33.37%, avg=8192.00, stdev= 0.00, samples=1 00:12:25.935 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:25.935 lat (usec) : 10=0.03%, 250=17.84%, 500=81.62%, 750=0.51% 00:12:25.935 cpu : usr=0.90%, sys=4.00%, ctx=3106, majf=0, minf=7 00:12:25.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.935 issued rwts: total=1413,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.935 job1: (groupid=0, jobs=1): err= 0: pid=77253: Mon Jul 15 19:40:51 2024 00:12:25.935 read: IOPS=1371, BW=5487KiB/s (5618kB/s)(5492KiB/1001msec) 00:12:25.935 slat (nsec): min=7057, max=56253, avg=16026.72, stdev=7562.98 00:12:25.935 clat (usec): min=170, max=609, avg=376.37, stdev=30.55 00:12:25.935 lat (usec): min=191, max=620, avg=392.40, stdev=30.90 00:12:25.935 clat percentiles (usec): 00:12:25.935 | 1.00th=[ 334], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 359], 00:12:25.935 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 375], 00:12:25.935 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 400], 95.00th=[ 420], 00:12:25.935 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 594], 99.95th=[ 611], 00:12:25.935 | 99.99th=[ 611] 00:12:25.935 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:25.935 slat (usec): min=8, max=108, avg=24.22, stdev= 9.87 00:12:25.935 clat (usec): min=85, max=7577, avg=272.59, stdev=222.09 00:12:25.935 lat (usec): min=131, max=7612, avg=296.82, stdev=221.21 00:12:25.935 clat percentiles (usec): 00:12:25.935 | 1.00th=[ 116], 5.00th=[ 141], 10.00th=[ 188], 20.00th=[ 196], 00:12:25.935 | 30.00th=[ 241], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:12:25.935 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 334], 00:12:25.935 | 99.00th=[ 392], 99.50th=[ 424], 99.90th=[ 3818], 99.95th=[ 7570], 00:12:25.935 | 99.99th=[ 7570] 00:12:25.935 bw ( KiB/s): min= 8088, max= 8088, per=32.94%, avg=8088.00, stdev= 0.00, samples=1 00:12:25.935 iops : min= 2022, max= 2022, avg=2022.00, stdev= 0.00, samples=1 00:12:25.935 lat (usec) : 100=0.03%, 250=16.05%, 500=83.05%, 750=0.69%, 1000=0.07% 00:12:25.935 lat (msec) : 4=0.07%, 10=0.03% 00:12:25.935 cpu : usr=1.80%, sys=4.00%, ctx=3078, majf=0, minf=9 00:12:25.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.935 issued rwts: total=1373,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.935 job2: (groupid=0, jobs=1): err= 0: pid=77255: Mon Jul 15 19:40:51 2024 00:12:25.935 read: IOPS=1411, BW=5644KiB/s (5779kB/s)(5644KiB/1000msec) 00:12:25.935 slat (nsec): min=7135, 
max=81584, avg=10058.01, stdev=5254.47 00:12:25.935 clat (usec): min=302, max=575, avg=383.93, stdev=31.20 00:12:25.935 lat (usec): min=329, max=588, avg=393.99, stdev=32.46 00:12:25.935 clat percentiles (usec): 00:12:25.935 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:12:25.935 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 388], 00:12:25.935 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 412], 95.00th=[ 437], 00:12:25.935 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 578], 99.95th=[ 578], 00:12:25.935 | 99.99th=[ 578] 00:12:25.935 write: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec); 0 zone resets 00:12:25.935 slat (nsec): min=9309, max=62092, avg=18272.59, stdev=5423.51 00:12:25.935 clat (usec): min=149, max=715, avg=268.10, stdev=47.83 00:12:25.935 lat (usec): min=170, max=730, avg=286.37, stdev=44.87 00:12:25.935 clat percentiles (usec): 00:12:25.935 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:12:25.935 | 30.00th=[ 225], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 293], 00:12:25.935 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 326], 00:12:25.935 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 420], 99.95th=[ 717], 00:12:25.935 | 99.99th=[ 717] 00:12:25.935 bw ( KiB/s): min= 8208, max= 8208, per=33.43%, avg=8208.00, stdev= 0.00, samples=1 00:12:25.935 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:12:25.935 lat (usec) : 250=17.71%, 500=81.64%, 750=0.64% 00:12:25.935 cpu : usr=1.20%, sys=3.20%, ctx=2999, majf=0, minf=10 00:12:25.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.935 issued rwts: total=1411,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.935 job3: (groupid=0, jobs=1): err= 0: pid=77256: Mon Jul 15 19:40:51 2024 00:12:25.935 read: IOPS=1383, BW=5534KiB/s (5667kB/s)(5540KiB/1001msec) 00:12:25.935 slat (nsec): min=8005, max=91200, avg=13284.70, stdev=5430.70 00:12:25.935 clat (usec): min=266, max=604, avg=378.65, stdev=27.66 00:12:25.935 lat (usec): min=277, max=616, avg=391.94, stdev=27.93 00:12:25.935 clat percentiles (usec): 00:12:25.935 | 1.00th=[ 334], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 363], 00:12:25.935 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:12:25.935 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 408], 95.00th=[ 424], 00:12:25.935 | 99.00th=[ 498], 99.50th=[ 545], 99.90th=[ 594], 99.95th=[ 603], 00:12:25.935 | 99.99th=[ 603] 00:12:25.935 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:25.935 slat (usec): min=4, max=2026, avg=26.33, stdev=51.99 00:12:25.935 clat (usec): min=4, max=3787, avg=268.08, stdev=106.63 00:12:25.935 lat (usec): min=144, max=3829, avg=294.41, stdev=113.59 00:12:25.935 clat percentiles (usec): 00:12:25.935 | 1.00th=[ 124], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 202], 00:12:25.935 | 30.00th=[ 251], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:12:25.936 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 306], 95.00th=[ 326], 00:12:25.936 | 99.00th=[ 371], 99.50th=[ 408], 99.90th=[ 930], 99.95th=[ 3785], 00:12:25.936 | 99.99th=[ 3785] 00:12:25.936 bw ( KiB/s): min= 8192, max= 8192, per=33.37%, avg=8192.00, stdev= 0.00, samples=1 00:12:25.936 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:25.936 
lat (usec) : 10=0.03%, 250=15.61%, 500=83.81%, 750=0.45%, 1000=0.07% 00:12:25.936 lat (msec) : 4=0.03% 00:12:25.936 cpu : usr=1.30%, sys=4.50%, ctx=2972, majf=0, minf=9 00:12:25.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.936 issued rwts: total=1385,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.936 00:12:25.936 Run status group 0 (all jobs): 00:12:25.936 READ: bw=21.8MiB/s (22.8MB/s), 5487KiB/s-5646KiB/s (5618kB/s-5782kB/s), io=21.8MiB (22.9MB), run=1000-1001msec 00:12:25.936 WRITE: bw=24.0MiB/s (25.1MB/s), 6138KiB/s-6144KiB/s (6285kB/s-6291kB/s), io=24.0MiB (25.2MB), run=1000-1001msec 00:12:25.936 00:12:25.936 Disk stats (read/write): 00:12:25.936 nvme0n1: ios=1097/1536, merge=0/0, ticks=401/397, in_queue=798, util=87.58% 00:12:25.936 nvme0n2: ios=1064/1505, merge=0/0, ticks=411/405, in_queue=816, util=87.42% 00:12:25.936 nvme0n3: ios=1045/1536, merge=0/0, ticks=372/387, in_queue=759, util=89.09% 00:12:25.936 nvme0n4: ios=1024/1528, merge=0/0, ticks=370/414, in_queue=784, util=89.75% 00:12:25.936 19:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:25.936 [global] 00:12:25.936 thread=1 00:12:25.936 invalidate=1 00:12:25.936 rw=randwrite 00:12:25.936 time_based=1 00:12:25.936 runtime=1 00:12:25.936 ioengine=libaio 00:12:25.936 direct=1 00:12:25.936 bs=4096 00:12:25.936 iodepth=1 00:12:25.936 norandommap=0 00:12:25.936 numjobs=1 00:12:25.936 00:12:25.936 verify_dump=1 00:12:25.936 verify_backlog=512 00:12:25.936 verify_state_save=0 00:12:25.936 do_verify=1 00:12:25.936 verify=crc32c-intel 00:12:25.936 [job0] 00:12:25.936 filename=/dev/nvme0n1 00:12:25.936 [job1] 00:12:25.936 filename=/dev/nvme0n2 00:12:25.936 [job2] 00:12:25.936 filename=/dev/nvme0n3 00:12:25.936 [job3] 00:12:25.936 filename=/dev/nvme0n4 00:12:25.936 Could not set queue depth (nvme0n1) 00:12:25.936 Could not set queue depth (nvme0n2) 00:12:25.936 Could not set queue depth (nvme0n3) 00:12:25.936 Could not set queue depth (nvme0n4) 00:12:26.195 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.195 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.195 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.195 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.195 fio-3.35 00:12:26.195 Starting 4 threads 00:12:27.134 00:12:27.134 job0: (groupid=0, jobs=1): err= 0: pid=77313: Mon Jul 15 19:40:52 2024 00:12:27.134 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:27.134 slat (nsec): min=12557, max=61249, avg=16495.83, stdev=3805.61 00:12:27.134 clat (usec): min=150, max=2862, avg=308.61, stdev=103.14 00:12:27.134 lat (usec): min=164, max=2886, avg=325.11, stdev=103.88 00:12:27.134 clat percentiles (usec): 00:12:27.134 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 281], 20.00th=[ 285], 00:12:27.134 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:12:27.134 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 396], 00:12:27.134 | 99.00th=[ 445], 
99.50th=[ 478], 99.90th=[ 2737], 99.95th=[ 2868], 00:12:27.134 | 99.99th=[ 2868] 00:12:27.134 write: IOPS=1953, BW=7812KiB/s (8000kB/s)(7820KiB/1001msec); 0 zone resets 00:12:27.134 slat (usec): min=18, max=136, avg=28.13, stdev= 9.52 00:12:27.134 clat (usec): min=110, max=504, avg=224.10, stdev=23.01 00:12:27.134 lat (usec): min=134, max=529, avg=252.23, stdev=22.21 00:12:27.134 clat percentiles (usec): 00:12:27.134 | 1.00th=[ 147], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:12:27.134 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:12:27.134 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 255], 00:12:27.134 | 99.00th=[ 281], 99.50th=[ 318], 99.90th=[ 424], 99.95th=[ 506], 00:12:27.134 | 99.99th=[ 506] 00:12:27.134 bw ( KiB/s): min= 8192, max= 8192, per=20.20%, avg=8192.00, stdev= 0.00, samples=1 00:12:27.134 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:27.134 lat (usec) : 250=52.39%, 500=47.38%, 750=0.09%, 1000=0.06% 00:12:27.134 lat (msec) : 2=0.03%, 4=0.06% 00:12:27.134 cpu : usr=1.50%, sys=5.90%, ctx=3511, majf=0, minf=12 00:12:27.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.134 issued rwts: total=1536,1955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.134 job1: (groupid=0, jobs=1): err= 0: pid=77314: Mon Jul 15 19:40:52 2024 00:12:27.134 read: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:12:27.134 slat (nsec): min=12988, max=45396, avg=18341.99, stdev=4180.13 00:12:27.134 clat (usec): min=140, max=515, avg=162.69, stdev=11.45 00:12:27.134 lat (usec): min=155, max=546, avg=181.03, stdev=12.55 00:12:27.134 clat percentiles (usec): 00:12:27.134 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:12:27.134 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 163], 00:12:27.134 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 178], 00:12:27.134 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 277], 99.95th=[ 347], 00:12:27.134 | 99.99th=[ 515] 00:12:27.134 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:27.134 slat (usec): min=19, max=255, avg=25.11, stdev= 7.71 00:12:27.134 clat (usec): min=14, max=281, avg=123.34, stdev= 9.98 00:12:27.134 lat (usec): min=124, max=320, avg=148.45, stdev=12.81 00:12:27.134 clat percentiles (usec): 00:12:27.134 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 117], 00:12:27.134 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:12:27.134 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 139], 00:12:27.134 | 99.00th=[ 149], 99.50th=[ 151], 99.90th=[ 235], 99.95th=[ 265], 00:12:27.134 | 99.99th=[ 281] 00:12:27.134 bw ( KiB/s): min=12288, max=12288, per=30.31%, avg=12288.00, stdev= 0.00, samples=1 00:12:27.134 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:27.134 lat (usec) : 20=0.02%, 250=99.90%, 500=0.07%, 750=0.02% 00:12:27.134 cpu : usr=2.50%, sys=9.80%, ctx=6010, majf=0, minf=5 00:12:27.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.134 issued rwts: total=2937,3072,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:12:27.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.134 job2: (groupid=0, jobs=1): err= 0: pid=77315: Mon Jul 15 19:40:52 2024 00:12:27.134 read: IOPS=1552, BW=6210KiB/s (6359kB/s)(6216KiB/1001msec) 00:12:27.134 slat (nsec): min=16034, max=53390, avg=20455.81, stdev=3621.65 00:12:27.134 clat (usec): min=159, max=641, avg=289.98, stdev=27.54 00:12:27.134 lat (usec): min=176, max=666, avg=310.44, stdev=27.85 00:12:27.134 clat percentiles (usec): 00:12:27.134 | 1.00th=[ 182], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 277], 00:12:27.134 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 289], 60.00th=[ 293], 00:12:27.134 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 310], 95.00th=[ 318], 00:12:27.134 | 99.00th=[ 359], 99.50th=[ 445], 99.90th=[ 537], 99.95th=[ 644], 00:12:27.134 | 99.99th=[ 644] 00:12:27.134 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:27.134 slat (nsec): min=21345, max=88159, avg=27089.85, stdev=4171.02 00:12:27.134 clat (usec): min=119, max=977, avg=221.71, stdev=36.23 00:12:27.134 lat (usec): min=143, max=1020, avg=248.80, stdev=36.63 00:12:27.134 clat percentiles (usec): 00:12:27.134 | 1.00th=[ 131], 5.00th=[ 178], 10.00th=[ 206], 20.00th=[ 212], 00:12:27.134 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:12:27.134 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 249], 00:12:27.134 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 611], 99.95th=[ 938], 00:12:27.134 | 99.99th=[ 979] 00:12:27.134 bw ( KiB/s): min= 8192, max= 8192, per=20.20%, avg=8192.00, stdev= 0.00, samples=1 00:12:27.134 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:27.134 lat (usec) : 250=55.27%, 500=44.53%, 750=0.14%, 1000=0.06% 00:12:27.134 cpu : usr=1.40%, sys=6.70%, ctx=3603, majf=0, minf=15 00:12:27.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.134 issued rwts: total=1554,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.134 job3: (groupid=0, jobs=1): err= 0: pid=77316: Mon Jul 15 19:40:52 2024 00:12:27.134 read: IOPS=2731, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:12:27.134 slat (nsec): min=13534, max=34818, avg=15290.59, stdev=1609.75 00:12:27.134 clat (usec): min=152, max=575, avg=171.09, stdev=12.53 00:12:27.134 lat (usec): min=167, max=589, avg=186.38, stdev=12.74 00:12:27.134 clat percentiles (usec): 00:12:27.134 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 163], 00:12:27.134 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:12:27.134 | 70.00th=[ 176], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 186], 00:12:27.134 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 277], 99.95th=[ 322], 00:12:27.134 | 99.99th=[ 578] 00:12:27.134 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:27.134 slat (nsec): min=19446, max=93289, avg=22248.85, stdev=4312.11 00:12:27.134 clat (usec): min=113, max=1829, avg=134.17, stdev=35.00 00:12:27.134 lat (usec): min=133, max=1850, avg=156.42, stdev=35.62 00:12:27.134 clat percentiles (usec): 00:12:27.134 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 126], 00:12:27.134 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:12:27.134 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 
145], 95.00th=[ 151], 00:12:27.134 | 99.00th=[ 161], 99.50th=[ 188], 99.90th=[ 400], 99.95th=[ 562], 00:12:27.134 | 99.99th=[ 1827] 00:12:27.134 bw ( KiB/s): min=12288, max=12288, per=30.31%, avg=12288.00, stdev= 0.00, samples=1 00:12:27.134 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:27.134 lat (usec) : 250=99.71%, 500=0.24%, 750=0.03% 00:12:27.134 lat (msec) : 2=0.02% 00:12:27.134 cpu : usr=1.50%, sys=8.90%, ctx=5807, majf=0, minf=13 00:12:27.134 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:27.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.134 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.134 issued rwts: total=2734,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.134 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:27.134 00:12:27.134 Run status group 0 (all jobs): 00:12:27.134 READ: bw=34.2MiB/s (35.8MB/s), 6138KiB/s-11.5MiB/s (6285kB/s-12.0MB/s), io=34.2MiB (35.9MB), run=1001-1001msec 00:12:27.134 WRITE: bw=39.6MiB/s (41.5MB/s), 7812KiB/s-12.0MiB/s (8000kB/s-12.6MB/s), io=39.6MiB (41.6MB), run=1001-1001msec 00:12:27.134 00:12:27.135 Disk stats (read/write): 00:12:27.135 nvme0n1: ios=1513/1536, merge=0/0, ticks=493/364, in_queue=857, util=88.28% 00:12:27.135 nvme0n2: ios=2598/2615, merge=0/0, ticks=448/346, in_queue=794, util=89.37% 00:12:27.135 nvme0n3: ios=1522/1536, merge=0/0, ticks=447/356, in_queue=803, util=89.27% 00:12:27.135 nvme0n4: ios=2449/2560, merge=0/0, ticks=431/368, in_queue=799, util=89.82% 00:12:27.135 19:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:27.393 [global] 00:12:27.393 thread=1 00:12:27.393 invalidate=1 00:12:27.393 rw=write 00:12:27.393 time_based=1 00:12:27.393 runtime=1 00:12:27.393 ioengine=libaio 00:12:27.393 direct=1 00:12:27.393 bs=4096 00:12:27.393 iodepth=128 00:12:27.393 norandommap=0 00:12:27.393 numjobs=1 00:12:27.393 00:12:27.393 verify_dump=1 00:12:27.393 verify_backlog=512 00:12:27.393 verify_state_save=0 00:12:27.393 do_verify=1 00:12:27.393 verify=crc32c-intel 00:12:27.393 [job0] 00:12:27.393 filename=/dev/nvme0n1 00:12:27.393 [job1] 00:12:27.393 filename=/dev/nvme0n2 00:12:27.393 [job2] 00:12:27.393 filename=/dev/nvme0n3 00:12:27.393 [job3] 00:12:27.393 filename=/dev/nvme0n4 00:12:27.393 Could not set queue depth (nvme0n1) 00:12:27.393 Could not set queue depth (nvme0n2) 00:12:27.393 Could not set queue depth (nvme0n3) 00:12:27.393 Could not set queue depth (nvme0n4) 00:12:27.393 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:27.393 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:27.393 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:27.393 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:27.393 fio-3.35 00:12:27.393 Starting 4 threads 00:12:28.768 00:12:28.768 job0: (groupid=0, jobs=1): err= 0: pid=77371: Mon Jul 15 19:40:54 2024 00:12:28.768 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:12:28.768 slat (usec): min=6, max=4016, avg=95.31, stdev=426.44 00:12:28.768 clat (usec): min=7508, max=15310, avg=12618.03, stdev=1076.67 00:12:28.768 lat (usec): min=7518, max=18173, avg=12713.35, stdev=1021.47 00:12:28.768 clat 
percentiles (usec): 00:12:28.768 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11863], 00:12:28.768 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:12:28.768 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14222], 00:12:28.768 | 99.00th=[14615], 99.50th=[14746], 99.90th=[15270], 99.95th=[15270], 00:12:28.768 | 99.99th=[15270] 00:12:28.768 write: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(20.2MiB/1002msec); 0 zone resets 00:12:28.768 slat (usec): min=11, max=3016, avg=91.46, stdev=378.32 00:12:28.768 clat (usec): min=1879, max=14933, avg=11998.46, stdev=1534.03 00:12:28.768 lat (usec): min=1897, max=14953, avg=12089.92, stdev=1531.77 00:12:28.768 clat percentiles (usec): 00:12:28.768 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[10552], 20.00th=[10945], 00:12:28.768 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12518], 00:12:28.768 | 70.00th=[12911], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:12:28.768 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14877], 99.95th=[14877], 00:12:28.768 | 99.99th=[14877] 00:12:28.768 bw ( KiB/s): min=20480, max=20480, per=27.48%, avg=20480.00, stdev= 0.00, samples=2 00:12:28.768 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:12:28.768 lat (msec) : 2=0.06%, 4=0.19%, 10=2.77%, 20=96.98% 00:12:28.768 cpu : usr=5.00%, sys=13.49%, ctx=544, majf=0, minf=5 00:12:28.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:28.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.768 issued rwts: total=5120,5170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.768 job1: (groupid=0, jobs=1): err= 0: pid=77372: Mon Jul 15 19:40:54 2024 00:12:28.768 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:12:28.768 slat (usec): min=4, max=5687, avg=95.27, stdev=480.25 00:12:28.768 clat (usec): min=7878, max=19545, avg=12527.52, stdev=1519.57 00:12:28.768 lat (usec): min=7897, max=19581, avg=12622.79, stdev=1571.01 00:12:28.768 clat percentiles (usec): 00:12:28.768 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10683], 20.00th=[11207], 00:12:28.768 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:12:28.768 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14615], 95.00th=[15270], 00:12:28.768 | 99.00th=[16581], 99.50th=[17171], 99.90th=[18220], 99.95th=[18220], 00:12:28.768 | 99.99th=[19530] 00:12:28.768 write: IOPS=5262, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1004msec); 0 zone resets 00:12:28.768 slat (usec): min=10, max=5450, avg=89.93, stdev=468.92 00:12:28.768 clat (usec): min=424, max=18333, avg=11893.33, stdev=1604.62 00:12:28.768 lat (usec): min=3820, max=18362, avg=11983.26, stdev=1658.15 00:12:28.768 clat percentiles (usec): 00:12:28.768 | 1.00th=[ 5080], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:12:28.768 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:12:28.768 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[14222], 00:12:28.768 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:12:28.768 | 99.99th=[18220] 00:12:28.768 bw ( KiB/s): min=20480, max=20768, per=27.67%, avg=20624.00, stdev=203.65, samples=2 00:12:28.768 iops : min= 5120, max= 5192, avg=5156.00, stdev=50.91, samples=2 00:12:28.768 lat (usec) : 500=0.01% 00:12:28.768 lat (msec) : 4=0.08%, 10=6.24%, 20=93.68% 00:12:28.768 cpu 
: usr=4.29%, sys=14.26%, ctx=485, majf=0, minf=5 00:12:28.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:28.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.768 issued rwts: total=5120,5284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.768 job2: (groupid=0, jobs=1): err= 0: pid=77373: Mon Jul 15 19:40:54 2024 00:12:28.768 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1004msec) 00:12:28.768 slat (usec): min=5, max=8972, avg=130.73, stdev=710.74 00:12:28.768 clat (usec): min=3322, max=31820, avg=17003.47, stdev=4503.19 00:12:28.768 lat (usec): min=6083, max=31836, avg=17134.20, stdev=4540.05 00:12:28.768 clat percentiles (usec): 00:12:28.768 | 1.00th=[11207], 5.00th=[12911], 10.00th=[13566], 20.00th=[13960], 00:12:28.768 | 30.00th=[14222], 40.00th=[14484], 50.00th=[15533], 60.00th=[16909], 00:12:28.768 | 70.00th=[17433], 80.00th=[18482], 90.00th=[25822], 95.00th=[28181], 00:12:28.768 | 99.00th=[30278], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:12:28.768 | 99.99th=[31851] 00:12:28.768 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:12:28.768 slat (usec): min=6, max=6967, avg=122.20, stdev=578.62 00:12:28.768 clat (usec): min=6180, max=27120, avg=16027.80, stdev=3334.82 00:12:28.768 lat (usec): min=6200, max=27839, avg=16150.00, stdev=3350.05 00:12:28.768 clat percentiles (usec): 00:12:28.768 | 1.00th=[10159], 5.00th=[11731], 10.00th=[12911], 20.00th=[13829], 00:12:28.768 | 30.00th=[14222], 40.00th=[14353], 50.00th=[15795], 60.00th=[16450], 00:12:28.768 | 70.00th=[16712], 80.00th=[17433], 90.00th=[21365], 95.00th=[22414], 00:12:28.768 | 99.00th=[26608], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:12:28.768 | 99.99th=[27132] 00:12:28.768 bw ( KiB/s): min=15424, max=16384, per=21.34%, avg=15904.00, stdev=678.82, samples=2 00:12:28.768 iops : min= 3856, max= 4096, avg=3976.00, stdev=169.71, samples=2 00:12:28.768 lat (msec) : 4=0.01%, 10=0.48%, 20=85.46%, 50=14.05% 00:12:28.768 cpu : usr=3.39%, sys=11.17%, ctx=411, majf=0, minf=9 00:12:28.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:28.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.768 issued rwts: total=3591,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.768 job3: (groupid=0, jobs=1): err= 0: pid=77374: Mon Jul 15 19:40:54 2024 00:12:28.768 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:12:28.768 slat (usec): min=3, max=8030, avg=120.72, stdev=596.44 00:12:28.768 clat (usec): min=10140, max=32849, avg=15926.40, stdev=4321.06 00:12:28.769 lat (usec): min=11236, max=32862, avg=16047.12, stdev=4319.83 00:12:28.769 clat percentiles (usec): 00:12:28.769 | 1.00th=[11338], 5.00th=[12387], 10.00th=[13435], 20.00th=[14091], 00:12:28.769 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:12:28.769 | 70.00th=[14746], 80.00th=[15270], 90.00th=[24773], 95.00th=[27395], 00:12:28.769 | 99.00th=[30016], 99.50th=[30802], 99.90th=[32900], 99.95th=[32900], 00:12:28.769 | 99.99th=[32900] 00:12:28.769 write: IOPS=4141, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1004msec); 0 zone resets 00:12:28.769 slat (usec): min=8, max=5262, 
avg=114.01, stdev=483.17 00:12:28.769 clat (usec): min=3022, max=27638, avg=14776.63, stdev=3368.95 00:12:28.769 lat (usec): min=3044, max=27676, avg=14890.63, stdev=3387.19 00:12:28.769 clat percentiles (usec): 00:12:28.769 | 1.00th=[11207], 5.00th=[11731], 10.00th=[11994], 20.00th=[12387], 00:12:28.769 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14222], 60.00th=[14746], 00:12:28.769 | 70.00th=[15139], 80.00th=[15795], 90.00th=[20317], 95.00th=[23200], 00:12:28.769 | 99.00th=[26346], 99.50th=[26870], 99.90th=[27657], 99.95th=[27657], 00:12:28.769 | 99.99th=[27657] 00:12:28.769 bw ( KiB/s): min=14088, max=18680, per=21.98%, avg=16384.00, stdev=3247.03, samples=2 00:12:28.769 iops : min= 3522, max= 4670, avg=4096.00, stdev=811.76, samples=2 00:12:28.769 lat (msec) : 4=0.06%, 10=0.39%, 20=87.64%, 50=11.91% 00:12:28.769 cpu : usr=2.79%, sys=12.06%, ctx=535, majf=0, minf=12 00:12:28.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:28.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.769 issued rwts: total=4096,4158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.769 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.769 00:12:28.769 Run status group 0 (all jobs): 00:12:28.769 READ: bw=69.7MiB/s (73.1MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=70.0MiB (73.4MB), run=1002-1004msec 00:12:28.769 WRITE: bw=72.8MiB/s (76.3MB/s), 15.9MiB/s-20.6MiB/s (16.7MB/s-21.6MB/s), io=73.1MiB (76.6MB), run=1002-1004msec 00:12:28.769 00:12:28.769 Disk stats (read/write): 00:12:28.769 nvme0n1: ios=4243/4608, merge=0/0, ticks=12388/12038, in_queue=24426, util=88.08% 00:12:28.769 nvme0n2: ios=4142/4608, merge=0/0, ticks=24693/23793, in_queue=48486, util=89.17% 00:12:28.769 nvme0n3: ios=3303/3584, merge=0/0, ticks=15750/15636, in_queue=31386, util=88.53% 00:12:28.769 nvme0n4: ios=3584/3943, merge=0/0, ticks=11843/12414, in_queue=24257, util=89.71% 00:12:28.769 19:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:28.769 [global] 00:12:28.769 thread=1 00:12:28.769 invalidate=1 00:12:28.769 rw=randwrite 00:12:28.769 time_based=1 00:12:28.769 runtime=1 00:12:28.769 ioengine=libaio 00:12:28.769 direct=1 00:12:28.769 bs=4096 00:12:28.769 iodepth=128 00:12:28.769 norandommap=0 00:12:28.769 numjobs=1 00:12:28.769 00:12:28.769 verify_dump=1 00:12:28.769 verify_backlog=512 00:12:28.769 verify_state_save=0 00:12:28.769 do_verify=1 00:12:28.769 verify=crc32c-intel 00:12:28.769 [job0] 00:12:28.769 filename=/dev/nvme0n1 00:12:28.769 [job1] 00:12:28.769 filename=/dev/nvme0n2 00:12:28.769 [job2] 00:12:28.769 filename=/dev/nvme0n3 00:12:28.769 [job3] 00:12:28.769 filename=/dev/nvme0n4 00:12:28.769 Could not set queue depth (nvme0n1) 00:12:28.769 Could not set queue depth (nvme0n2) 00:12:28.769 Could not set queue depth (nvme0n3) 00:12:28.769 Could not set queue depth (nvme0n4) 00:12:28.769 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:28.769 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:28.769 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:28.769 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:12:28.769 fio-3.35 00:12:28.769 Starting 4 threads 00:12:30.144 00:12:30.144 job0: (groupid=0, jobs=1): err= 0: pid=77428: Mon Jul 15 19:40:55 2024 00:12:30.144 read: IOPS=5559, BW=21.7MiB/s (22.8MB/s)(22.0MiB/1013msec) 00:12:30.144 slat (usec): min=4, max=11069, avg=93.22, stdev=603.93 00:12:30.144 clat (usec): min=4809, max=21798, avg=11753.20, stdev=2841.17 00:12:30.144 lat (usec): min=4821, max=21812, avg=11846.42, stdev=2873.89 00:12:30.144 clat percentiles (usec): 00:12:30.144 | 1.00th=[ 5342], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9765], 00:12:30.144 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:12:30.144 | 70.00th=[12518], 80.00th=[13435], 90.00th=[16188], 95.00th=[18220], 00:12:30.144 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21627], 99.95th=[21890], 00:12:30.144 | 99.99th=[21890] 00:12:30.144 write: IOPS=5755, BW=22.5MiB/s (23.6MB/s)(22.8MiB/1013msec); 0 zone resets 00:12:30.144 slat (usec): min=5, max=8674, avg=73.98, stdev=322.03 00:12:30.144 clat (usec): min=3531, max=22248, avg=10637.96, stdev=2680.30 00:12:30.144 lat (usec): min=3557, max=22256, avg=10711.94, stdev=2702.15 00:12:30.144 clat percentiles (usec): 00:12:30.144 | 1.00th=[ 4555], 5.00th=[ 5342], 10.00th=[ 6194], 20.00th=[ 8979], 00:12:30.144 | 30.00th=[ 9896], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:12:30.144 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[12911], 00:12:30.144 | 99.00th=[18744], 99.50th=[20579], 99.90th=[22152], 99.95th=[22152], 00:12:30.144 | 99.99th=[22152] 00:12:30.144 bw ( KiB/s): min=21064, max=24560, per=35.34%, avg=22812.00, stdev=2472.05, samples=2 00:12:30.144 iops : min= 5266, max= 6140, avg=5703.00, stdev=618.01, samples=2 00:12:30.144 lat (msec) : 4=0.10%, 10=26.24%, 20=72.46%, 50=1.20% 00:12:30.144 cpu : usr=5.14%, sys=13.34%, ctx=825, majf=0, minf=15 00:12:30.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:30.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:30.144 issued rwts: total=5632,5830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:30.144 job1: (groupid=0, jobs=1): err= 0: pid=77433: Mon Jul 15 19:40:55 2024 00:12:30.144 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:12:30.144 slat (usec): min=3, max=12859, avg=193.63, stdev=961.54 00:12:30.144 clat (usec): min=16302, max=36108, avg=24412.10, stdev=3466.92 00:12:30.144 lat (usec): min=16320, max=37773, avg=24605.73, stdev=3562.14 00:12:30.144 clat percentiles (usec): 00:12:30.144 | 1.00th=[16909], 5.00th=[18744], 10.00th=[21103], 20.00th=[22414], 00:12:30.144 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23987], 00:12:30.144 | 70.00th=[24511], 80.00th=[27395], 90.00th=[29230], 95.00th=[31327], 00:12:30.144 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35390], 99.95th=[35914], 00:12:30.144 | 99.99th=[35914] 00:12:30.144 write: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(11.2MiB/1013msec); 0 zone resets 00:12:30.144 slat (usec): min=4, max=10956, avg=169.30, stdev=896.53 00:12:30.144 clat (usec): min=11265, max=35552, avg=23014.26, stdev=4020.37 00:12:30.144 lat (usec): min=11287, max=35602, avg=23183.56, stdev=4082.92 00:12:30.144 clat percentiles (usec): 00:12:30.144 | 1.00th=[11469], 5.00th=[15139], 10.00th=[17171], 20.00th=[19792], 00:12:30.144 | 30.00th=[22152], 40.00th=[22676], 50.00th=[23725], 
60.00th=[24511], 00:12:30.144 | 70.00th=[25297], 80.00th=[26084], 90.00th=[26870], 95.00th=[29230], 00:12:30.144 | 99.00th=[31065], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:12:30.144 | 99.99th=[35390] 00:12:30.144 bw ( KiB/s): min= 9576, max=12288, per=16.94%, avg=10932.00, stdev=1917.67, samples=2 00:12:30.144 iops : min= 2394, max= 3072, avg=2733.00, stdev=479.42, samples=2 00:12:30.144 lat (msec) : 20=13.71%, 50=86.29% 00:12:30.144 cpu : usr=2.27%, sys=7.21%, ctx=800, majf=0, minf=5 00:12:30.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:30.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:30.144 issued rwts: total=2560,2860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:30.144 job2: (groupid=0, jobs=1): err= 0: pid=77435: Mon Jul 15 19:40:55 2024 00:12:30.144 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:12:30.144 slat (usec): min=4, max=15102, avg=114.45, stdev=767.98 00:12:30.144 clat (usec): min=4411, max=32968, avg=14287.10, stdev=4127.42 00:12:30.144 lat (usec): min=4432, max=33002, avg=14401.54, stdev=4170.75 00:12:30.144 clat percentiles (usec): 00:12:30.145 | 1.00th=[ 6063], 5.00th=[10159], 10.00th=[10683], 20.00th=[11469], 00:12:30.145 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:12:30.145 | 70.00th=[15401], 80.00th=[17433], 90.00th=[20579], 95.00th=[22938], 00:12:30.145 | 99.00th=[27132], 99.50th=[29230], 99.90th=[32375], 99.95th=[32375], 00:12:30.145 | 99.99th=[32900] 00:12:30.145 write: IOPS=4864, BW=19.0MiB/s (19.9MB/s)(19.2MiB/1013msec); 0 zone resets 00:12:30.145 slat (usec): min=5, max=14714, avg=88.63, stdev=490.95 00:12:30.145 clat (usec): min=3116, max=32357, avg=12677.22, stdev=3326.39 00:12:30.145 lat (usec): min=3140, max=32371, avg=12765.85, stdev=3365.99 00:12:30.145 clat percentiles (usec): 00:12:30.145 | 1.00th=[ 4752], 5.00th=[ 6128], 10.00th=[ 7373], 20.00th=[10814], 00:12:30.145 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:12:30.145 | 70.00th=[13566], 80.00th=[13829], 90.00th=[16188], 95.00th=[17957], 00:12:30.145 | 99.00th=[22938], 99.50th=[24249], 99.90th=[31065], 99.95th=[31065], 00:12:30.145 | 99.99th=[32375] 00:12:30.145 bw ( KiB/s): min=18456, max=19952, per=29.75%, avg=19204.00, stdev=1057.83, samples=2 00:12:30.145 iops : min= 4614, max= 4988, avg=4801.00, stdev=264.46, samples=2 00:12:30.145 lat (msec) : 4=0.22%, 10=11.35%, 20=82.50%, 50=5.94% 00:12:30.145 cpu : usr=3.95%, sys=11.66%, ctx=660, majf=0, minf=9 00:12:30.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:30.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:30.145 issued rwts: total=4608,4928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:30.145 job3: (groupid=0, jobs=1): err= 0: pid=77436: Mon Jul 15 19:40:55 2024 00:12:30.145 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:12:30.145 slat (usec): min=4, max=11105, avg=193.24, stdev=907.75 00:12:30.145 clat (usec): min=15578, max=38784, avg=23881.42, stdev=3022.77 00:12:30.145 lat (usec): min=16490, max=38807, avg=24074.66, stdev=3107.71 00:12:30.145 clat percentiles (usec): 00:12:30.145 | 1.00th=[17695], 
5.00th=[19530], 10.00th=[21365], 20.00th=[22152], 00:12:30.145 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:12:30.145 | 70.00th=[23987], 80.00th=[25822], 90.00th=[27919], 95.00th=[29230], 00:12:30.145 | 99.00th=[35390], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:12:30.145 | 99.99th=[38536] 00:12:30.145 write: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(10.7MiB/1010msec); 0 zone resets 00:12:30.145 slat (usec): min=5, max=13721, avg=178.18, stdev=908.23 00:12:30.145 clat (usec): min=8848, max=36596, avg=23934.12, stdev=3388.14 00:12:30.145 lat (usec): min=9598, max=36615, avg=24112.29, stdev=3449.56 00:12:30.145 clat percentiles (usec): 00:12:30.145 | 1.00th=[15270], 5.00th=[17695], 10.00th=[18744], 20.00th=[21627], 00:12:30.145 | 30.00th=[22676], 40.00th=[23462], 50.00th=[24249], 60.00th=[25035], 00:12:30.145 | 70.00th=[25822], 80.00th=[26608], 90.00th=[27657], 95.00th=[29230], 00:12:30.145 | 99.00th=[31327], 99.50th=[31589], 99.90th=[33817], 99.95th=[34341], 00:12:30.145 | 99.99th=[36439] 00:12:30.145 bw ( KiB/s): min= 8600, max=12208, per=16.12%, avg=10404.00, stdev=2551.24, samples=2 00:12:30.145 iops : min= 2150, max= 3052, avg=2601.00, stdev=637.81, samples=2 00:12:30.145 lat (msec) : 10=0.08%, 20=9.81%, 50=90.11% 00:12:30.145 cpu : usr=2.68%, sys=6.64%, ctx=732, majf=0, minf=17 00:12:30.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:30.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:30.145 issued rwts: total=2560,2728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:30.145 00:12:30.145 Run status group 0 (all jobs): 00:12:30.145 READ: bw=59.2MiB/s (62.1MB/s), 9.87MiB/s-21.7MiB/s (10.4MB/s-22.8MB/s), io=60.0MiB (62.9MB), run=1010-1013msec 00:12:30.145 WRITE: bw=63.0MiB/s (66.1MB/s), 10.5MiB/s-22.5MiB/s (11.1MB/s-23.6MB/s), io=63.9MiB (67.0MB), run=1010-1013msec 00:12:30.145 00:12:30.145 Disk stats (read/write): 00:12:30.145 nvme0n1: ios=4783/5120, merge=0/0, ticks=52160/52023, in_queue=104183, util=89.37% 00:12:30.145 nvme0n2: ios=2160/2560, merge=0/0, ticks=24846/27503, in_queue=52349, util=89.70% 00:12:30.145 nvme0n3: ios=4050/4096, merge=0/0, ticks=54391/50388, in_queue=104779, util=90.89% 00:12:30.145 nvme0n4: ios=2080/2495, merge=0/0, ticks=23603/27652, in_queue=51255, util=90.23% 00:12:30.145 19:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:30.145 19:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77449 00:12:30.145 19:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:30.145 19:40:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:30.145 [global] 00:12:30.145 thread=1 00:12:30.145 invalidate=1 00:12:30.145 rw=read 00:12:30.145 time_based=1 00:12:30.145 runtime=10 00:12:30.145 ioengine=libaio 00:12:30.145 direct=1 00:12:30.145 bs=4096 00:12:30.145 iodepth=1 00:12:30.145 norandommap=1 00:12:30.145 numjobs=1 00:12:30.145 00:12:30.145 [job0] 00:12:30.145 filename=/dev/nvme0n1 00:12:30.145 [job1] 00:12:30.145 filename=/dev/nvme0n2 00:12:30.145 [job2] 00:12:30.145 filename=/dev/nvme0n3 00:12:30.145 [job3] 00:12:30.145 filename=/dev/nvme0n4 00:12:30.145 Could not set queue depth (nvme0n1) 00:12:30.145 Could not set queue depth (nvme0n2) 00:12:30.145 Could not set queue depth (nvme0n3) 
00:12:30.145 Could not set queue depth (nvme0n4) 00:12:30.145 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.145 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.145 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.145 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:30.145 fio-3.35 00:12:30.145 Starting 4 threads 00:12:33.428 19:40:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:33.428 fio: pid=77492, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:33.428 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=39247872, buflen=4096 00:12:33.428 19:40:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:33.428 fio: pid=77491, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:33.428 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=43888640, buflen=4096 00:12:33.428 19:40:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:33.428 19:40:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:33.690 fio: pid=77489, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:33.690 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=48353280, buflen=4096 00:12:33.690 19:40:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:33.690 19:40:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:33.948 fio: pid=77490, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:33.948 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=53665792, buflen=4096 00:12:33.948 00:12:33.948 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77489: Mon Jul 15 19:40:59 2024 00:12:33.948 read: IOPS=3503, BW=13.7MiB/s (14.3MB/s)(46.1MiB/3370msec) 00:12:33.948 slat (usec): min=7, max=9787, avg=16.83, stdev=162.26 00:12:33.948 clat (usec): min=49, max=3637, avg=267.07, stdev=67.00 00:12:33.948 lat (usec): min=146, max=10053, avg=283.90, stdev=175.20 00:12:33.948 clat percentiles (usec): 00:12:33.948 | 1.00th=[ 149], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 253], 00:12:33.948 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:12:33.948 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:12:33.948 | 99.00th=[ 379], 99.50th=[ 457], 99.90th=[ 1045], 99.95th=[ 1876], 00:12:33.948 | 99.99th=[ 2573] 00:12:33.948 bw ( KiB/s): min=13456, max=14192, per=28.03%, avg=13955.17, stdev=292.61, samples=6 00:12:33.948 iops : min= 3364, max= 3548, avg=3488.67, stdev=73.03, samples=6 00:12:33.948 lat (usec) : 50=0.01%, 250=15.80%, 500=83.74%, 750=0.31%, 1000=0.03% 00:12:33.948 lat (msec) : 2=0.07%, 4=0.04% 00:12:33.948 cpu : usr=1.19%, sys=4.10%, ctx=11817, majf=0, minf=1 00:12:33.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.948 issued rwts: total=11806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.948 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77490: Mon Jul 15 19:40:59 2024 00:12:33.948 read: IOPS=3607, BW=14.1MiB/s (14.8MB/s)(51.2MiB/3632msec) 00:12:33.948 slat (usec): min=8, max=15736, avg=18.99, stdev=234.75 00:12:33.948 clat (usec): min=120, max=8247, avg=256.71, stdev=122.55 00:12:33.948 lat (usec): min=145, max=15954, avg=275.70, stdev=264.60 00:12:33.948 clat percentiles (usec): 00:12:33.948 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 155], 20.00th=[ 247], 00:12:33.948 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:12:33.948 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:12:33.948 | 99.00th=[ 375], 99.50th=[ 486], 99.90th=[ 1106], 99.95th=[ 2147], 00:12:33.948 | 99.99th=[ 7963] 00:12:33.948 bw ( KiB/s): min=12936, max=17120, per=28.74%, avg=14306.71, stdev=1321.59, samples=7 00:12:33.948 iops : min= 3234, max= 4280, avg=3576.57, stdev=330.41, samples=7 00:12:33.948 lat (usec) : 250=24.41%, 500=75.09%, 750=0.31%, 1000=0.06% 00:12:33.948 lat (msec) : 2=0.05%, 4=0.05%, 10=0.02% 00:12:33.948 cpu : usr=1.05%, sys=4.49%, ctx=13125, majf=0, minf=1 00:12:33.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.948 issued rwts: total=13103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.948 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77491: Mon Jul 15 19:40:59 2024 00:12:33.948 read: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(41.9MiB/3134msec) 00:12:33.948 slat (usec): min=14, max=7805, avg=23.65, stdev=105.65 00:12:33.948 clat (usec): min=142, max=4258, avg=266.56, stdev=89.51 00:12:33.948 lat (usec): min=158, max=7975, avg=290.21, stdev=138.34 00:12:33.948 clat percentiles (usec): 00:12:33.948 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 192], 00:12:33.948 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:12:33.948 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:12:33.948 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 1004], 99.95th=[ 1876], 00:12:33.948 | 99.99th=[ 3785] 00:12:33.948 bw ( KiB/s): min=12384, max=16088, per=26.85%, avg=13366.33, stdev=1441.95, samples=6 00:12:33.948 iops : min= 3096, max= 4022, avg=3341.50, stdev=360.53, samples=6 00:12:33.948 lat (usec) : 250=24.50%, 500=75.24%, 750=0.07%, 1000=0.08% 00:12:33.948 lat (msec) : 2=0.06%, 4=0.04%, 10=0.01% 00:12:33.948 cpu : usr=1.31%, sys=6.32%, ctx=10722, majf=0, minf=1 00:12:33.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.948 issued rwts: total=10716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.948 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77492: Mon Jul 
15 19:40:59 2024 00:12:33.948 read: IOPS=3306, BW=12.9MiB/s (13.5MB/s)(37.4MiB/2898msec) 00:12:33.948 slat (nsec): min=12357, max=99557, avg=18112.90, stdev=5712.24 00:12:33.948 clat (usec): min=151, max=3503, avg=282.46, stdev=69.17 00:12:33.948 lat (usec): min=167, max=3526, avg=300.58, stdev=67.83 00:12:33.948 clat percentiles (usec): 00:12:33.948 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 281], 00:12:33.948 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:12:33.948 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 326], 00:12:33.948 | 99.00th=[ 367], 99.50th=[ 408], 99.90th=[ 652], 99.95th=[ 1811], 00:12:33.948 | 99.99th=[ 3490] 00:12:33.948 bw ( KiB/s): min=12528, max=16264, per=26.81%, avg=13346.80, stdev=1632.80, samples=5 00:12:33.948 iops : min= 3132, max= 4066, avg=3336.60, stdev=408.24, samples=5 00:12:33.948 lat (usec) : 250=15.15%, 500=84.63%, 750=0.14%, 1000=0.02% 00:12:33.948 lat (msec) : 2=0.02%, 4=0.03% 00:12:33.948 cpu : usr=0.79%, sys=5.11%, ctx=9594, majf=0, minf=1 00:12:33.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:33.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.948 issued rwts: total=9583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:33.948 00:12:33.948 Run status group 0 (all jobs): 00:12:33.948 READ: bw=48.6MiB/s (51.0MB/s), 12.9MiB/s-14.1MiB/s (13.5MB/s-14.8MB/s), io=177MiB (185MB), run=2898-3632msec 00:12:33.948 00:12:33.948 Disk stats (read/write): 00:12:33.948 nvme0n1: ios=11743/0, merge=0/0, ticks=3120/0, in_queue=3120, util=95.02% 00:12:33.948 nvme0n2: ios=12933/0, merge=0/0, ticks=3340/0, in_queue=3340, util=94.75% 00:12:33.948 nvme0n3: ios=10553/0, merge=0/0, ticks=2913/0, in_queue=2913, util=96.32% 00:12:33.948 nvme0n4: ios=9444/0, merge=0/0, ticks=2729/0, in_queue=2729, util=96.78% 00:12:33.948 19:40:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:33.948 19:40:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:34.206 19:40:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:34.206 19:40:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:34.775 19:41:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:34.775 19:41:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:34.775 19:41:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:34.775 19:41:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:35.341 19:41:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:35.341 19:41:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 
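The output above is the hotplug half of target/fio.sh: the read jobs started earlier through scripts/fio-wrapper are left running while the backing bdevs are deleted over RPC, the per-job Remote I/O errors are the intended result, and the non-zero fio status checked next is the pass condition. A minimal sketch of that pattern, reusing the wrapper arguments, bdev names, and sleep interval shown in the trace (the delete loop and the status handling are simplified assumptions, not the script's exact code):

cd /home/vagrant/spdk_repo/spdk
# 10-second 4k read workload against the connected /dev/nvme0nX devices, run in the background
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3   # give the jobs time to start issuing I/O
# pull the storage out from under fio while it is still reading
scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$b"
done
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'   # a non-zero exit here is expected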
00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77449 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.598 nvmf hotplug test: fio failed as expected 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:35.598 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.856 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.113 rmmod nvme_tcp 00:12:36.113 rmmod nvme_fabrics 00:12:36.113 rmmod nvme_keyring 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76955 ']' 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76955 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76955 ']' 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76955 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.113 19:41:01 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76955 00:12:36.113 killing process with pid 76955 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76955' 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76955 00:12:36.113 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76955 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:36.371 ************************************ 00:12:36.371 END TEST nvmf_fio_target 00:12:36.371 ************************************ 00:12:36.371 00:12:36.371 real 0m20.001s 00:12:36.371 user 1m17.648s 00:12:36.371 sys 0m8.217s 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.371 19:41:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.371 19:41:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:36.371 19:41:02 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:36.371 19:41:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:36.371 19:41:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.371 19:41:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.371 ************************************ 00:12:36.371 START TEST nvmf_bdevio 00:12:36.371 ************************************ 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:36.371 * Looking for test storage... 
00:12:36.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.371 19:41:02 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:36.371 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.372 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:36.629 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:36.629 Cannot find device "nvmf_tgt_br" 00:12:36.629 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:12:36.629 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.629 Cannot find device "nvmf_tgt_br2" 00:12:36.629 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:36.630 Cannot find device "nvmf_tgt_br" 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:36.630 Cannot find device "nvmf_tgt_br2" 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:36.630 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:36.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:12:36.888 00:12:36.888 --- 10.0.0.2 ping statistics --- 00:12:36.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.888 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:36.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:36.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:12:36.888 00:12:36.888 --- 10.0.0.3 ping statistics --- 00:12:36.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.888 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:36.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:12:36.888 00:12:36.888 --- 10.0.0.1 ping statistics --- 00:12:36.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.888 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77818 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77818 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77818 ']' 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.888 19:41:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.888 [2024-07-15 19:41:02.574682] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:12:36.888 [2024-07-15 19:41:02.574776] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.147 [2024-07-15 19:41:02.716884] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.147 [2024-07-15 19:41:02.809766] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.147 [2024-07-15 19:41:02.809827] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
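nvmf_veth_init, traced above, builds the network that those three pings verify: the initiator keeps one veth end (nvmf_init_if, 10.0.0.1) in the default namespace, the two target ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge. A condensed sketch of the same topology, limited to the commands visible in the trace (the preceding cleanup and the individual ip link set ... up steps are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, stays in the default netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT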
00:12:37.147 [2024-07-15 19:41:02.809838] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.147 [2024-07-15 19:41:02.809846] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.147 [2024-07-15 19:41:02.809853] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.147 [2024-07-15 19:41:02.810040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:37.147 [2024-07-15 19:41:02.810709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:37.147 [2024-07-15 19:41:02.810805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:37.147 [2024-07-15 19:41:02.810813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:38.083 [2024-07-15 19:41:03.636330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:38.083 Malloc0 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
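The rpc_cmd calls above are the whole data path for this test: a TCP transport, one 64 MiB / 512-byte-block malloc bdev, and a subsystem exposing it on the listener that bdevio connects to. The same setup reproduced as plain rpc.py invocations with the arguments from the trace (rpc_cmd wraps rpc.py against the default /var/tmp/spdk.sock, which is assumed here):

cd /home/vagrant/spdk_repo/spdk
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # same transport options the test passes
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then pointed at that listener through the JSON that gen_nvmf_target_json prints in the trace that follows.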
00:12:38.083 [2024-07-15 19:41:03.717520] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:38.083 { 00:12:38.083 "params": { 00:12:38.083 "name": "Nvme$subsystem", 00:12:38.083 "trtype": "$TEST_TRANSPORT", 00:12:38.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:38.083 "adrfam": "ipv4", 00:12:38.083 "trsvcid": "$NVMF_PORT", 00:12:38.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:38.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:38.083 "hdgst": ${hdgst:-false}, 00:12:38.083 "ddgst": ${ddgst:-false} 00:12:38.083 }, 00:12:38.083 "method": "bdev_nvme_attach_controller" 00:12:38.083 } 00:12:38.083 EOF 00:12:38.083 )") 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:38.083 19:41:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:38.083 "params": { 00:12:38.083 "name": "Nvme1", 00:12:38.083 "trtype": "tcp", 00:12:38.083 "traddr": "10.0.0.2", 00:12:38.083 "adrfam": "ipv4", 00:12:38.083 "trsvcid": "4420", 00:12:38.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:38.083 "hdgst": false, 00:12:38.083 "ddgst": false 00:12:38.083 }, 00:12:38.083 "method": "bdev_nvme_attach_controller" 00:12:38.083 }' 00:12:38.083 [2024-07-15 19:41:03.775975] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:12:38.083 [2024-07-15 19:41:03.776111] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77878 ] 00:12:38.344 [2024-07-15 19:41:03.917472] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.344 [2024-07-15 19:41:04.033306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.344 [2024-07-15 19:41:04.033367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.344 [2024-07-15 19:41:04.033377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.602 I/O targets: 00:12:38.602 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:38.602 00:12:38.602 00:12:38.602 CUnit - A unit testing framework for C - Version 2.1-3 00:12:38.602 http://cunit.sourceforge.net/ 00:12:38.602 00:12:38.602 00:12:38.602 Suite: bdevio tests on: Nvme1n1 00:12:38.602 Test: blockdev write read block ...passed 00:12:38.602 Test: blockdev write zeroes read block ...passed 00:12:38.602 Test: blockdev write zeroes read no split ...passed 00:12:38.602 Test: blockdev write zeroes read split ...passed 00:12:38.602 Test: blockdev write zeroes read split partial ...passed 00:12:38.602 Test: blockdev reset ...[2024-07-15 19:41:04.332342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:38.602 [2024-07-15 19:41:04.332666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a9320 (9): Bad file descriptor 00:12:38.602 [2024-07-15 19:41:04.344083] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:38.602 passed 00:12:38.602 Test: blockdev write read 8 blocks ...passed 00:12:38.602 Test: blockdev write read size > 128k ...passed 00:12:38.602 Test: blockdev write read invalid size ...passed 00:12:38.861 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:38.861 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:38.861 Test: blockdev write read max offset ...passed 00:12:38.861 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:38.861 Test: blockdev writev readv 8 blocks ...passed 00:12:38.861 Test: blockdev writev readv 30 x 1block ...passed 00:12:38.861 Test: blockdev writev readv block ...passed 00:12:38.861 Test: blockdev writev readv size > 128k ...passed 00:12:38.861 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:38.861 Test: blockdev comparev and writev ...[2024-07-15 19:41:04.520270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:38.861 [2024-07-15 19:41:04.520634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.520731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:38.861 [2024-07-15 19:41:04.520811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.521325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:38.861 [2024-07-15 19:41:04.521435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.521526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:38.861 [2024-07-15 19:41:04.521625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.522106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:38.861 [2024-07-15 19:41:04.522231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.522320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:38.861 [2024-07-15 19:41:04.522381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.522852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:38.861 [2024-07-15 19:41:04.522948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.523024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:38.861 [2024-07-15 19:41:04.523092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:38.861 passed 00:12:38.861 Test: blockdev nvme passthru rw ...passed 00:12:38.861 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:41:04.605704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:38.861 [2024-07-15 19:41:04.605858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.606083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:38.861 [2024-07-15 19:41:04.606301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.606538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:38.861 [2024-07-15 19:41:04.606732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:38.861 [2024-07-15 19:41:04.606953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:38.861 [2024-07-15 19:41:04.607145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:12:38.861 Test: blockdev nvme admin passthru ...qhd:002f p:0 m:0 dnr:0 00:12:38.861 passed 00:12:39.120 Test: blockdev copy ...passed 00:12:39.120 00:12:39.120 Run Summary: Type Total Ran Passed Failed Inactive 00:12:39.120 suites 1 1 n/a 0 0 00:12:39.120 tests 23 23 23 0 0 00:12:39.120 asserts 152 152 152 0 n/a 00:12:39.120 00:12:39.120 Elapsed time = 0.900 seconds 00:12:39.120 19:41:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.120 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.120 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:39.120 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.120 19:41:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:39.120 19:41:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:39.120 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.120 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.378 rmmod nvme_tcp 00:12:39.378 rmmod nvme_fabrics 00:12:39.378 rmmod nvme_keyring 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77818 ']' 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77818 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77818 ']' 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77818 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77818 00:12:39.378 killing process with pid 77818 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77818' 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77818 00:12:39.378 19:41:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77818 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:39.637 00:12:39.637 real 0m3.255s 00:12:39.637 user 0m11.743s 00:12:39.637 sys 0m0.789s 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.637 ************************************ 00:12:39.637 END TEST nvmf_bdevio 00:12:39.637 19:41:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:39.637 ************************************ 00:12:39.637 19:41:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.637 19:41:05 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:39.637 19:41:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.637 19:41:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.637 19:41:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.637 ************************************ 00:12:39.637 START TEST nvmf_auth_target 00:12:39.637 ************************************ 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:39.637 * Looking for test storage... 
00:12:39.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.637 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:39.896 Cannot find device "nvmf_tgt_br" 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.896 Cannot find device "nvmf_tgt_br2" 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:39.896 Cannot find device "nvmf_tgt_br" 00:12:39.896 
19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:39.896 Cannot find device "nvmf_tgt_br2" 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:39.896 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:39.897 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:39.897 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:39.897 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:39.897 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:39.897 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:39.897 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:39.897 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:40.155 19:41:05 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:40.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:40.155 00:12:40.155 --- 10.0.0.2 ping statistics --- 00:12:40.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.155 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:40.155 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:40.155 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:40.155 00:12:40.155 --- 10.0.0.3 ping statistics --- 00:12:40.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.155 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:40.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:12:40.155 00:12:40.155 --- 10.0.0.1 ping statistics --- 00:12:40.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.155 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=78059 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 78059 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78059 ']' 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.155 19:41:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.155 19:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=78103 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:41.090 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.091 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:41.091 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:12:41.091 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:41.091 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:41.091 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8793e51619cdc7d77c3d699413b2a3f965bded87ccafca40 00:12:41.091 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.l8U 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8793e51619cdc7d77c3d699413b2a3f965bded87ccafca40 0 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8793e51619cdc7d77c3d699413b2a3f965bded87ccafca40 0 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8793e51619cdc7d77c3d699413b2a3f965bded87ccafca40 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.l8U 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.l8U 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.l8U 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cb751960b315b53a96322db5f0b4663f44c448bfcf880acb96e05ee47d836c10 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Fk5 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cb751960b315b53a96322db5f0b4663f44c448bfcf880acb96e05ee47d836c10 3 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cb751960b315b53a96322db5f0b4663f44c448bfcf880acb96e05ee47d836c10 3 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cb751960b315b53a96322db5f0b4663f44c448bfcf880acb96e05ee47d836c10 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Fk5 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Fk5 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Fk5 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:41.350 19:41:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=83eaa653ac0574a7e112501bf1ee15bf 00:12:41.350 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:41.350 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TG4 00:12:41.350 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 83eaa653ac0574a7e112501bf1ee15bf 1 00:12:41.350 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 83eaa653ac0574a7e112501bf1ee15bf 1 
00:12:41.350 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:41.350 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:41.350 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=83eaa653ac0574a7e112501bf1ee15bf 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TG4 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TG4 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.TG4 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a52ca790c27758daed6f467ff461c5f262f9eadcfd3b35e1 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VJA 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a52ca790c27758daed6f467ff461c5f262f9eadcfd3b35e1 2 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a52ca790c27758daed6f467ff461c5f262f9eadcfd3b35e1 2 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a52ca790c27758daed6f467ff461c5f262f9eadcfd3b35e1 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VJA 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VJA 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.VJA 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:41.351 
19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=08d0ce4c7189d2ce46e33367af762d43d9252ec585a89df4 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Gk0 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 08d0ce4c7189d2ce46e33367af762d43d9252ec585a89df4 2 00:12:41.351 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 08d0ce4c7189d2ce46e33367af762d43d9252ec585a89df4 2 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=08d0ce4c7189d2ce46e33367af762d43d9252ec585a89df4 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Gk0 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Gk0 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Gk0 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=621e477d48638ab1cbc1f2d42e3d9a2a 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dso 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 621e477d48638ab1cbc1f2d42e3d9a2a 1 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 621e477d48638ab1cbc1f2d42e3d9a2a 1 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=621e477d48638ab1cbc1f2d42e3d9a2a 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dso 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dso 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.dso 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6bb93b76fcb793b847757cf3bbd7ecf72a7d5fab04c007177c2b8d062a6c3ee3 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ONB 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6bb93b76fcb793b847757cf3bbd7ecf72a7d5fab04c007177c2b8d062a6c3ee3 3 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6bb93b76fcb793b847757cf3bbd7ecf72a7d5fab04c007177c2b8d062a6c3ee3 3 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6bb93b76fcb793b847757cf3bbd7ecf72a7d5fab04c007177c2b8d062a6c3ee3 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ONB 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ONB 00:12:41.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ONB 00:12:41.610 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:12:41.611 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 78059 00:12:41.611 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78059 ']' 00:12:41.611 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.611 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.611 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.611 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.611 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
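The trace above is gen_dhchap_key (the nvmf/common.sh helper invoked from target/auth.sh@67-70) producing the four DHCHAP secrets and their controller counterparts: pull random bytes from /dev/urandom with xxd, write the hex into a mktemp'd /tmp/spdk.key-<digest>.XXX file, chmod it 0600, and echo the path so the caller can stash it in keys[]/ckeys[]. A minimal bash sketch of that visible pattern follows; the function name is invented for illustration, and the inline "python -" step that encodes the hex into the final DHHC-1 secret string is only hinted at in a comment, since its exact encoding is not shown in this part of the log.

# Sketch of the key-generation pattern traced above (illustrative only).
# It reproduces just the steps visible in the log: xxd from /dev/urandom,
# mktemp, chmod 0600. The "python -" encoding into a DHHC-1:<id>:<base64>:
# secret is omitted, so this file holds plain hex rather than the final format.
gen_key_sketch() {
    local digest=$1 len=$2                          # e.g. null 48, sha256 32, sha512 64
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex characters
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    echo "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
keyfile=$(gen_key_sketch null 48)   # later loaded with: rpc.py keyring_file_add_key key0 "$keyfile"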
00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 78103 /var/tmp/host.sock 00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78103 ']' 00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.869 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.127 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.127 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:42.127 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:12:42.127 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.127 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.386 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.386 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:42.386 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.l8U 00:12:42.386 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.386 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.386 19:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.386 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.l8U 00:12:42.386 19:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.l8U 00:12:42.386 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Fk5 ]] 00:12:42.386 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Fk5 00:12:42.386 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.386 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.644 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.644 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Fk5 00:12:42.644 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Fk5 00:12:42.644 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:42.644 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.TG4 00:12:42.644 19:41:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.644 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.904 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.904 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.TG4 00:12:42.904 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.TG4 00:12:43.162 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.VJA ]] 00:12:43.162 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VJA 00:12:43.162 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.162 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.162 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.162 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VJA 00:12:43.162 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VJA 00:12:43.421 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:43.421 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Gk0 00:12:43.421 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.421 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.421 19:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.421 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Gk0 00:12:43.421 19:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Gk0 00:12:43.679 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.dso ]] 00:12:43.679 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dso 00:12:43.679 19:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.679 19:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.679 19:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.679 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dso 00:12:43.679 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dso 00:12:43.938 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:43.938 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ONB 00:12:43.938 19:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.938 19:41:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.938 19:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.938 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ONB 00:12:43.938 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ONB 00:12:44.196 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:12:44.196 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:44.196 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.196 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.196 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:44.196 19:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.455 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.713 00:12:44.713 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.713 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.713 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.972 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.972 19:41:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.972 19:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.972 19:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.972 19:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.972 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.972 { 00:12:44.972 "auth": { 00:12:44.972 "dhgroup": "null", 00:12:44.972 "digest": "sha256", 00:12:44.972 "state": "completed" 00:12:44.972 }, 00:12:44.972 "cntlid": 1, 00:12:44.972 "listen_address": { 00:12:44.972 "adrfam": "IPv4", 00:12:44.972 "traddr": "10.0.0.2", 00:12:44.972 "trsvcid": "4420", 00:12:44.972 "trtype": "TCP" 00:12:44.972 }, 00:12:44.972 "peer_address": { 00:12:44.972 "adrfam": "IPv4", 00:12:44.972 "traddr": "10.0.0.1", 00:12:44.972 "trsvcid": "45014", 00:12:44.972 "trtype": "TCP" 00:12:44.972 }, 00:12:44.972 "qid": 0, 00:12:44.972 "state": "enabled", 00:12:44.972 "thread": "nvmf_tgt_poll_group_000" 00:12:44.972 } 00:12:44.972 ]' 00:12:44.972 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.231 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.231 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.231 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:45.231 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.231 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.231 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.231 19:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.508 19:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:50.767 19:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.767 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.767 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.025 { 00:12:51.025 "auth": { 00:12:51.025 "dhgroup": "null", 00:12:51.025 "digest": "sha256", 00:12:51.025 "state": "completed" 00:12:51.025 }, 00:12:51.025 "cntlid": 3, 00:12:51.025 "listen_address": { 00:12:51.025 "adrfam": "IPv4", 00:12:51.025 "traddr": "10.0.0.2", 00:12:51.025 "trsvcid": "4420", 00:12:51.025 "trtype": "TCP" 00:12:51.025 }, 00:12:51.025 "peer_address": { 00:12:51.025 "adrfam": "IPv4", 00:12:51.025 "traddr": "10.0.0.1", 00:12:51.025 "trsvcid": "45048", 00:12:51.025 "trtype": "TCP" 00:12:51.025 }, 00:12:51.025 "qid": 0, 00:12:51.025 "state": "enabled", 00:12:51.025 "thread": "nvmf_tgt_poll_group_000" 
00:12:51.025 } 00:12:51.025 ]' 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:51.025 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.282 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.282 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.282 19:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.540 19:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:12:52.105 19:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.105 19:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:52.105 19:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.105 19:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.105 19:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.105 19:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.105 19:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:52.105 19:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.363 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.621 00:12:52.621 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.621 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.621 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.187 { 00:12:53.187 "auth": { 00:12:53.187 "dhgroup": "null", 00:12:53.187 "digest": "sha256", 00:12:53.187 "state": "completed" 00:12:53.187 }, 00:12:53.187 "cntlid": 5, 00:12:53.187 "listen_address": { 00:12:53.187 "adrfam": "IPv4", 00:12:53.187 "traddr": "10.0.0.2", 00:12:53.187 "trsvcid": "4420", 00:12:53.187 "trtype": "TCP" 00:12:53.187 }, 00:12:53.187 "peer_address": { 00:12:53.187 "adrfam": "IPv4", 00:12:53.187 "traddr": "10.0.0.1", 00:12:53.187 "trsvcid": "55996", 00:12:53.187 "trtype": "TCP" 00:12:53.187 }, 00:12:53.187 "qid": 0, 00:12:53.187 "state": "enabled", 00:12:53.187 "thread": "nvmf_tgt_poll_group_000" 00:12:53.187 } 00:12:53.187 ]' 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.187 19:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.446 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid 
da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:12:54.011 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.011 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:54.011 19:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.011 19:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.011 19:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.011 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.011 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:54.011 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:54.270 19:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:54.527 00:12:54.527 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.527 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.527 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.094 { 00:12:55.094 "auth": { 00:12:55.094 "dhgroup": "null", 00:12:55.094 "digest": "sha256", 00:12:55.094 "state": "completed" 00:12:55.094 }, 00:12:55.094 "cntlid": 7, 00:12:55.094 "listen_address": { 00:12:55.094 "adrfam": "IPv4", 00:12:55.094 "traddr": "10.0.0.2", 00:12:55.094 "trsvcid": "4420", 00:12:55.094 "trtype": "TCP" 00:12:55.094 }, 00:12:55.094 "peer_address": { 00:12:55.094 "adrfam": "IPv4", 00:12:55.094 "traddr": "10.0.0.1", 00:12:55.094 "trsvcid": "56018", 00:12:55.094 "trtype": "TCP" 00:12:55.094 }, 00:12:55.094 "qid": 0, 00:12:55.094 "state": "enabled", 00:12:55.094 "thread": "nvmf_tgt_poll_group_000" 00:12:55.094 } 00:12:55.094 ]' 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.094 19:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.352 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:12:55.918 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.918 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:55.918 19:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.918 19:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.918 19:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.918 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:55.918 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.918 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:55.918 19:41:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.176 19:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.740 00:12:56.740 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.740 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.740 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.998 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.999 { 00:12:56.999 "auth": { 00:12:56.999 "dhgroup": "ffdhe2048", 00:12:56.999 "digest": "sha256", 00:12:56.999 "state": "completed" 00:12:56.999 }, 00:12:56.999 "cntlid": 9, 00:12:56.999 "listen_address": { 00:12:56.999 "adrfam": "IPv4", 00:12:56.999 "traddr": "10.0.0.2", 00:12:56.999 "trsvcid": "4420", 00:12:56.999 "trtype": "TCP" 00:12:56.999 }, 00:12:56.999 "peer_address": { 00:12:56.999 "adrfam": "IPv4", 00:12:56.999 "traddr": "10.0.0.1", 00:12:56.999 "trsvcid": "56042", 00:12:56.999 "trtype": "TCP" 00:12:56.999 }, 00:12:56.999 "qid": 0, 
00:12:56.999 "state": "enabled", 00:12:56.999 "thread": "nvmf_tgt_poll_group_000" 00:12:56.999 } 00:12:56.999 ]' 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.999 19:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.565 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:12:58.131 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.131 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:12:58.131 19:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.131 19:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.131 19:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.131 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.131 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:58.131 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.389 19:41:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.389 19:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.647 00:12:58.647 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.647 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.647 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.905 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.906 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.906 19:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.906 19:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.906 19:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.906 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.906 { 00:12:58.906 "auth": { 00:12:58.906 "dhgroup": "ffdhe2048", 00:12:58.906 "digest": "sha256", 00:12:58.906 "state": "completed" 00:12:58.906 }, 00:12:58.906 "cntlid": 11, 00:12:58.906 "listen_address": { 00:12:58.906 "adrfam": "IPv4", 00:12:58.906 "traddr": "10.0.0.2", 00:12:58.906 "trsvcid": "4420", 00:12:58.906 "trtype": "TCP" 00:12:58.906 }, 00:12:58.906 "peer_address": { 00:12:58.906 "adrfam": "IPv4", 00:12:58.906 "traddr": "10.0.0.1", 00:12:58.906 "trsvcid": "56066", 00:12:58.906 "trtype": "TCP" 00:12:58.906 }, 00:12:58.906 "qid": 0, 00:12:58.906 "state": "enabled", 00:12:58.906 "thread": "nvmf_tgt_poll_group_000" 00:12:58.906 } 00:12:58.906 ]' 00:12:58.906 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.906 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.906 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.164 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:59.164 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.164 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.164 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.164 19:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.423 19:41:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:13:00.359 19:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.360 19:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:00.360 19:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.360 19:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.360 19:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.360 19:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.360 19:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:00.360 19:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.360 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.927 00:13:00.927 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.927 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.927 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.186 { 00:13:01.186 "auth": { 00:13:01.186 "dhgroup": "ffdhe2048", 00:13:01.186 "digest": "sha256", 00:13:01.186 "state": "completed" 00:13:01.186 }, 00:13:01.186 "cntlid": 13, 00:13:01.186 "listen_address": { 00:13:01.186 "adrfam": "IPv4", 00:13:01.186 "traddr": "10.0.0.2", 00:13:01.186 "trsvcid": "4420", 00:13:01.186 "trtype": "TCP" 00:13:01.186 }, 00:13:01.186 "peer_address": { 00:13:01.186 "adrfam": "IPv4", 00:13:01.186 "traddr": "10.0.0.1", 00:13:01.186 "trsvcid": "56104", 00:13:01.186 "trtype": "TCP" 00:13:01.186 }, 00:13:01.186 "qid": 0, 00:13:01.186 "state": "enabled", 00:13:01.186 "thread": "nvmf_tgt_poll_group_000" 00:13:01.186 } 00:13:01.186 ]' 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.186 19:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.445 19:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:13:02.381 19:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.381 19:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:02.381 19:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.381 19:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.381 19:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.381 19:41:27 
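The entries above close out one pass of the test's inner loop for sha256/ffdhe2048: the host's allowed digests and DH groups are set, the host NQN is registered on the subsystem with the key slot under test, a controller is attached through the user-space initiator so the DH-HMAC-CHAP handshake runs, the negotiated parameters are checked, and the controller is detached again. A condensed sketch of that sequence, using the same rpc_cmd (target-side RPC helper) and hostrpc (rpc.py -s /var/tmp/host.sock) wrappers that appear in the trace; $subnqn, $hostnqn and the key names are placeholders standing in for the literal values logged above:

  # one connect_authenticate iteration, reconstructed from the trace above
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"            # auth checks on this output, see below
  hostrpc bdev_nvme_detach_controller nvme0
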
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.381 19:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:02.381 19:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.381 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.640 00:13:02.640 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.640 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.640 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.899 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.899 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.899 19:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.899 19:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.899 19:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.899 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.899 { 00:13:02.899 "auth": { 00:13:02.899 "dhgroup": "ffdhe2048", 00:13:02.899 "digest": "sha256", 00:13:02.899 "state": "completed" 00:13:02.899 }, 00:13:02.899 "cntlid": 15, 00:13:02.899 "listen_address": { 00:13:02.899 "adrfam": "IPv4", 00:13:02.899 "traddr": "10.0.0.2", 00:13:02.899 "trsvcid": "4420", 00:13:02.899 "trtype": "TCP" 00:13:02.899 }, 00:13:02.899 
"peer_address": { 00:13:02.899 "adrfam": "IPv4", 00:13:02.899 "traddr": "10.0.0.1", 00:13:02.899 "trsvcid": "36634", 00:13:02.899 "trtype": "TCP" 00:13:02.899 }, 00:13:02.899 "qid": 0, 00:13:02.899 "state": "enabled", 00:13:02.899 "thread": "nvmf_tgt_poll_group_000" 00:13:02.899 } 00:13:02.899 ]' 00:13:02.899 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.157 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.157 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.157 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:03.157 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.157 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.157 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.157 19:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.416 19:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:03.983 19:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:04.241 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:04.241 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.241 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:04.241 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:04.241 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:04.241 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.241 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.242 19:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.242 19:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.242 19:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.242 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.242 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.808 00:13:04.808 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.808 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:04.808 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.067 { 00:13:05.067 "auth": { 00:13:05.067 "dhgroup": "ffdhe3072", 00:13:05.067 "digest": "sha256", 00:13:05.067 "state": "completed" 00:13:05.067 }, 00:13:05.067 "cntlid": 17, 00:13:05.067 "listen_address": { 00:13:05.067 "adrfam": "IPv4", 00:13:05.067 "traddr": "10.0.0.2", 00:13:05.067 "trsvcid": "4420", 00:13:05.067 "trtype": "TCP" 00:13:05.067 }, 00:13:05.067 "peer_address": { 00:13:05.067 "adrfam": "IPv4", 00:13:05.067 "traddr": "10.0.0.1", 00:13:05.067 "trsvcid": "36658", 00:13:05.067 "trtype": "TCP" 00:13:05.067 }, 00:13:05.067 "qid": 0, 00:13:05.067 "state": "enabled", 00:13:05.067 "thread": "nvmf_tgt_poll_group_000" 00:13:05.067 } 00:13:05.067 ]' 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.067 19:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.067 19:41:30 
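The recurring jq checks are the actual pass/fail criteria of each iteration: the qpair reported by the target must carry the digest and DH group that were just configured, and its auth state must read "completed". A minimal standalone version of that verification, with the same $subnqn placeholder as in the sketch above:

  # assert the negotiated DH-HMAC-CHAP parameters on the first qpair
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
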
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.325 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.322 19:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.322 19:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.322 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.322 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.578 00:13:06.578 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.578 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.578 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.834 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.834 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.834 19:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.834 19:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.091 { 00:13:07.091 "auth": { 00:13:07.091 "dhgroup": "ffdhe3072", 00:13:07.091 "digest": "sha256", 00:13:07.091 "state": "completed" 00:13:07.091 }, 00:13:07.091 "cntlid": 19, 00:13:07.091 "listen_address": { 00:13:07.091 "adrfam": "IPv4", 00:13:07.091 "traddr": "10.0.0.2", 00:13:07.091 "trsvcid": "4420", 00:13:07.091 "trtype": "TCP" 00:13:07.091 }, 00:13:07.091 "peer_address": { 00:13:07.091 "adrfam": "IPv4", 00:13:07.091 "traddr": "10.0.0.1", 00:13:07.091 "trsvcid": "36678", 00:13:07.091 "trtype": "TCP" 00:13:07.091 }, 00:13:07.091 "qid": 0, 00:13:07.091 "state": "enabled", 00:13:07.091 "thread": "nvmf_tgt_poll_group_000" 00:13:07.091 } 00:13:07.091 ]' 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.091 19:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.348 19:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:13:08.280 19:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.280 19:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:08.280 19:41:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.280 19:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.280 19:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.280 19:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.280 19:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:08.280 19:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.538 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.796 00:13:08.796 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.796 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.796 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.054 { 00:13:09.054 "auth": { 
00:13:09.054 "dhgroup": "ffdhe3072", 00:13:09.054 "digest": "sha256", 00:13:09.054 "state": "completed" 00:13:09.054 }, 00:13:09.054 "cntlid": 21, 00:13:09.054 "listen_address": { 00:13:09.054 "adrfam": "IPv4", 00:13:09.054 "traddr": "10.0.0.2", 00:13:09.054 "trsvcid": "4420", 00:13:09.054 "trtype": "TCP" 00:13:09.054 }, 00:13:09.054 "peer_address": { 00:13:09.054 "adrfam": "IPv4", 00:13:09.054 "traddr": "10.0.0.1", 00:13:09.054 "trsvcid": "36708", 00:13:09.054 "trtype": "TCP" 00:13:09.054 }, 00:13:09.054 "qid": 0, 00:13:09.054 "state": "enabled", 00:13:09.054 "thread": "nvmf_tgt_poll_group_000" 00:13:09.054 } 00:13:09.054 ]' 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:09.054 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.311 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.311 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.311 19:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.567 19:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:13:10.145 19:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.145 19:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:10.145 19:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.145 19:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.145 19:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.145 19:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.145 19:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:10.145 19:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.406 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.993 00:13:10.993 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.993 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.993 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.251 { 00:13:11.251 "auth": { 00:13:11.251 "dhgroup": "ffdhe3072", 00:13:11.251 "digest": "sha256", 00:13:11.251 "state": "completed" 00:13:11.251 }, 00:13:11.251 "cntlid": 23, 00:13:11.251 "listen_address": { 00:13:11.251 "adrfam": "IPv4", 00:13:11.251 "traddr": "10.0.0.2", 00:13:11.251 "trsvcid": "4420", 00:13:11.251 "trtype": "TCP" 00:13:11.251 }, 00:13:11.251 "peer_address": { 00:13:11.251 "adrfam": "IPv4", 00:13:11.251 "traddr": "10.0.0.1", 00:13:11.251 "trsvcid": "36734", 00:13:11.251 "trtype": "TCP" 00:13:11.251 }, 00:13:11.251 "qid": 0, 00:13:11.251 "state": "enabled", 00:13:11.251 "thread": "nvmf_tgt_poll_group_000" 00:13:11.251 } 00:13:11.251 ]' 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:11.251 19:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.251 19:41:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.251 19:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.251 19:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.818 19:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:12.386 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.675 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.676 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.676 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.934 00:13:12.934 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.934 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.934 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.192 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.192 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.192 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.192 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.449 19:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.449 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.449 { 00:13:13.449 "auth": { 00:13:13.449 "dhgroup": "ffdhe4096", 00:13:13.449 "digest": "sha256", 00:13:13.449 "state": "completed" 00:13:13.449 }, 00:13:13.449 "cntlid": 25, 00:13:13.449 "listen_address": { 00:13:13.449 "adrfam": "IPv4", 00:13:13.449 "traddr": "10.0.0.2", 00:13:13.449 "trsvcid": "4420", 00:13:13.449 "trtype": "TCP" 00:13:13.449 }, 00:13:13.449 "peer_address": { 00:13:13.449 "adrfam": "IPv4", 00:13:13.449 "traddr": "10.0.0.1", 00:13:13.449 "trsvcid": "39788", 00:13:13.449 "trtype": "TCP" 00:13:13.449 }, 00:13:13.449 "qid": 0, 00:13:13.449 "state": "enabled", 00:13:13.449 "thread": "nvmf_tgt_poll_group_000" 00:13:13.449 } 00:13:13.449 ]' 00:13:13.449 19:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.449 19:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.449 19:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.449 19:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:13.449 19:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.449 19:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.449 19:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.449 19:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.706 19:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:13:14.273 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.273 
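Besides the user-space bdev_nvme path, every iteration repeats the handshake with the kernel initiator through nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line; the DHHC-1:... strings in the connect lines above are the literal key material used by this run. Condensed, with $subnqn, $hostnqn, $hostid and the secrets kept as placeholders:

  # kernel-initiator leg of an iteration, mirroring the nvme-cli calls in the trace
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$host_key" \
      --dhchap-ctrl-secret "$ctrl_key"   # dropped for key slot 3, which has no controller key
  nvme disconnect -n "$subnqn"           # expect: disconnected 1 controller(s)
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
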
19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:14.273 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.273 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.273 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.273 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.273 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:14.273 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.840 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.099 00:13:15.099 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.099 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.099 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.357 { 00:13:15.357 "auth": { 00:13:15.357 "dhgroup": "ffdhe4096", 00:13:15.357 "digest": "sha256", 00:13:15.357 "state": "completed" 00:13:15.357 }, 00:13:15.357 "cntlid": 27, 00:13:15.357 "listen_address": { 00:13:15.357 "adrfam": "IPv4", 00:13:15.357 "traddr": "10.0.0.2", 00:13:15.357 "trsvcid": "4420", 00:13:15.357 "trtype": "TCP" 00:13:15.357 }, 00:13:15.357 "peer_address": { 00:13:15.357 "adrfam": "IPv4", 00:13:15.357 "traddr": "10.0.0.1", 00:13:15.357 "trsvcid": "39820", 00:13:15.357 "trtype": "TCP" 00:13:15.357 }, 00:13:15.357 "qid": 0, 00:13:15.357 "state": "enabled", 00:13:15.357 "thread": "nvmf_tgt_poll_group_000" 00:13:15.357 } 00:13:15.357 ]' 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.357 19:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.357 19:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:15.357 19:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.357 19:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.357 19:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.357 19:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.924 19:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:13:16.491 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.491 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:16.491 19:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.491 19:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.491 19:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.491 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.491 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:16.491 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:16.749 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.750 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.008 00:13:17.267 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.267 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.267 19:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.526 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.526 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.526 19:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.527 { 00:13:17.527 "auth": { 00:13:17.527 "dhgroup": "ffdhe4096", 00:13:17.527 "digest": "sha256", 00:13:17.527 "state": "completed" 00:13:17.527 }, 00:13:17.527 "cntlid": 29, 00:13:17.527 "listen_address": { 00:13:17.527 "adrfam": "IPv4", 00:13:17.527 "traddr": "10.0.0.2", 00:13:17.527 "trsvcid": "4420", 00:13:17.527 "trtype": "TCP" 00:13:17.527 }, 00:13:17.527 "peer_address": { 00:13:17.527 "adrfam": "IPv4", 00:13:17.527 "traddr": "10.0.0.1", 00:13:17.527 "trsvcid": "39846", 00:13:17.527 "trtype": "TCP" 00:13:17.527 }, 00:13:17.527 "qid": 0, 00:13:17.527 "state": "enabled", 00:13:17.527 "thread": "nvmf_tgt_poll_group_000" 00:13:17.527 } 00:13:17.527 ]' 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.527 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.785 19:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:13:18.724 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.724 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:18.724 19:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.724 19:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.724 19:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.724 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.724 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:18.724 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:18.983 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:19.241 00:13:19.241 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.241 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.241 19:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.498 { 00:13:19.498 "auth": { 00:13:19.498 "dhgroup": "ffdhe4096", 00:13:19.498 "digest": "sha256", 00:13:19.498 "state": "completed" 00:13:19.498 }, 00:13:19.498 "cntlid": 31, 00:13:19.498 "listen_address": { 00:13:19.498 "adrfam": "IPv4", 00:13:19.498 "traddr": "10.0.0.2", 00:13:19.498 "trsvcid": "4420", 00:13:19.498 "trtype": "TCP" 00:13:19.498 }, 00:13:19.498 "peer_address": { 00:13:19.498 "adrfam": "IPv4", 00:13:19.498 "traddr": "10.0.0.1", 00:13:19.498 "trsvcid": "39886", 00:13:19.498 "trtype": "TCP" 00:13:19.498 }, 00:13:19.498 "qid": 0, 00:13:19.498 "state": "enabled", 00:13:19.498 "thread": "nvmf_tgt_poll_group_000" 00:13:19.498 } 00:13:19.498 ]' 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.498 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.757 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.757 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.757 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.757 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.757 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.015 19:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.950 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.950 19:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.517 00:13:21.517 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.517 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.517 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
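Each connect_authenticate pass recorded above follows the same host-side sequence: hostrpc (which the trace shows expanding to rpc.py -s /var/tmp/host.sock) configures the allowed DH-HMAC-CHAP digests and DH groups, rpc_cmd registers the host NQN on the subsystem with the key under test, a controller is attached, the resulting qpair's auth block is inspected, and the controller is detached again. The lines below are a condensed sketch of one such pass assembled only from commands that appear in the trace, not part of the suite itself; $hostnqn is a stand-in for the nqn.2014-08.org.nvmexpress:uuid:... host NQN used throughout.
  # One connect_authenticate pass (sha256 / ffdhe6144 / key0); $hostnqn is a placeholder variable
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'              # expected to print nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
      jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'  # expected: sha256 / ffdhe6144 / completed
  hostrpc bdev_nvme_detach_controller nvme0
The trace runs the three jq checks as separate calls; they are collapsed into one filter here only to keep the sketch short.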
00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.776 { 00:13:21.776 "auth": { 00:13:21.776 "dhgroup": "ffdhe6144", 00:13:21.776 "digest": "sha256", 00:13:21.776 "state": "completed" 00:13:21.776 }, 00:13:21.776 "cntlid": 33, 00:13:21.776 "listen_address": { 00:13:21.776 "adrfam": "IPv4", 00:13:21.776 "traddr": "10.0.0.2", 00:13:21.776 "trsvcid": "4420", 00:13:21.776 "trtype": "TCP" 00:13:21.776 }, 00:13:21.776 "peer_address": { 00:13:21.776 "adrfam": "IPv4", 00:13:21.776 "traddr": "10.0.0.1", 00:13:21.776 "trsvcid": "39926", 00:13:21.776 "trtype": "TCP" 00:13:21.776 }, 00:13:21.776 "qid": 0, 00:13:21.776 "state": "enabled", 00:13:21.776 "thread": "nvmf_tgt_poll_group_000" 00:13:21.776 } 00:13:21.776 ]' 00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.776 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.035 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:22.035 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.035 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.035 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.035 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.294 19:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:13:22.857 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.857 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:22.857 19:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.857 19:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.857 19:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.857 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.857 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:22.857 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.115 19:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.680 00:13:23.680 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.680 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.680 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.938 { 00:13:23.938 "auth": { 00:13:23.938 "dhgroup": "ffdhe6144", 00:13:23.938 "digest": "sha256", 00:13:23.938 "state": "completed" 00:13:23.938 }, 00:13:23.938 "cntlid": 35, 00:13:23.938 "listen_address": { 00:13:23.938 "adrfam": "IPv4", 00:13:23.938 "traddr": "10.0.0.2", 00:13:23.938 "trsvcid": "4420", 00:13:23.938 "trtype": "TCP" 00:13:23.938 }, 00:13:23.938 "peer_address": { 00:13:23.938 "adrfam": "IPv4", 00:13:23.938 "traddr": "10.0.0.1", 00:13:23.938 "trsvcid": "54602", 00:13:23.938 "trtype": "TCP" 00:13:23.938 }, 00:13:23.938 "qid": 0, 00:13:23.938 "state": "enabled", 00:13:23.938 "thread": "nvmf_tgt_poll_group_000" 00:13:23.938 } 00:13:23.938 ]' 00:13:23.938 19:41:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:23.938 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.196 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.196 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.196 19:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.454 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:13:25.020 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.020 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:25.020 19:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.020 19:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.020 19:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.020 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.020 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:25.020 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.279 
19:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.279 19:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.843 00:13:25.844 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.844 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.844 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.101 { 00:13:26.101 "auth": { 00:13:26.101 "dhgroup": "ffdhe6144", 00:13:26.101 "digest": "sha256", 00:13:26.101 "state": "completed" 00:13:26.101 }, 00:13:26.101 "cntlid": 37, 00:13:26.101 "listen_address": { 00:13:26.101 "adrfam": "IPv4", 00:13:26.101 "traddr": "10.0.0.2", 00:13:26.101 "trsvcid": "4420", 00:13:26.101 "trtype": "TCP" 00:13:26.101 }, 00:13:26.101 "peer_address": { 00:13:26.101 "adrfam": "IPv4", 00:13:26.101 "traddr": "10.0.0.1", 00:13:26.101 "trsvcid": "54644", 00:13:26.101 "trtype": "TCP" 00:13:26.101 }, 00:13:26.101 "qid": 0, 00:13:26.101 "state": "enabled", 00:13:26.101 "thread": "nvmf_tgt_poll_group_000" 00:13:26.101 } 00:13:26.101 ]' 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.101 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.102 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:26.102 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.102 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.102 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.102 19:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.667 19:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid 
da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:13:27.232 19:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.232 19:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:27.232 19:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.232 19:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.232 19:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.232 19:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.232 19:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:27.232 19:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.490 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.054 00:13:28.054 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.054 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.054 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.311 { 00:13:28.311 "auth": { 00:13:28.311 "dhgroup": "ffdhe6144", 00:13:28.311 "digest": "sha256", 00:13:28.311 "state": "completed" 00:13:28.311 }, 00:13:28.311 "cntlid": 39, 00:13:28.311 "listen_address": { 00:13:28.311 "adrfam": "IPv4", 00:13:28.311 "traddr": "10.0.0.2", 00:13:28.311 "trsvcid": "4420", 00:13:28.311 "trtype": "TCP" 00:13:28.311 }, 00:13:28.311 "peer_address": { 00:13:28.311 "adrfam": "IPv4", 00:13:28.311 "traddr": "10.0.0.1", 00:13:28.311 "trsvcid": "54686", 00:13:28.311 "trtype": "TCP" 00:13:28.311 }, 00:13:28.311 "qid": 0, 00:13:28.311 "state": "enabled", 00:13:28.311 "thread": "nvmf_tgt_poll_group_000" 00:13:28.311 } 00:13:28.311 ]' 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.311 19:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.311 19:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:28.311 19:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.311 19:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.311 19:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.311 19:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.874 19:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:13:29.439 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.697 19:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.628 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.628 { 00:13:30.628 "auth": { 00:13:30.628 "dhgroup": "ffdhe8192", 00:13:30.628 "digest": "sha256", 00:13:30.628 "state": "completed" 00:13:30.628 }, 00:13:30.628 "cntlid": 41, 00:13:30.628 "listen_address": { 00:13:30.628 "adrfam": "IPv4", 00:13:30.628 "traddr": "10.0.0.2", 00:13:30.628 "trsvcid": "4420", 00:13:30.628 "trtype": "TCP" 00:13:30.628 }, 00:13:30.628 "peer_address": { 00:13:30.628 "adrfam": "IPv4", 00:13:30.628 "traddr": "10.0.0.1", 00:13:30.628 "trsvcid": "54698", 00:13:30.628 "trtype": "TCP" 00:13:30.628 }, 
00:13:30.628 "qid": 0, 00:13:30.628 "state": "enabled", 00:13:30.628 "thread": "nvmf_tgt_poll_group_000" 00:13:30.628 } 00:13:30.628 ]' 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.628 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.885 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:30.885 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.885 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.885 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.885 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.143 19:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:13:31.709 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.709 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:31.709 19:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.709 19:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.968 19:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.968 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.968 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:31.968 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 
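Interleaved with those host-bdev checks, the trace also exercises the same credentials through the Linux kernel initiator: after the SPDK controller is detached, nvme-cli connects with the DHHC-1 secrets for the key index under test, disconnects, and the host is removed from the subsystem before the next pass begins. A minimal sketch of that leg, using only flags that appear in the trace; $hostid, $host_key and $ctrl_key are placeholders for the UUID and DHHC-1:xx:...: strings printed above.
  # Kernel-initiator leg of a pass; $hostid, $host_key and $ctrl_key are placeholders
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
       -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
       --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"   # --dhchap-ctrl-secret only when a controller key is configured
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0                       # the trace expects "disconnected 1 controller(s)"
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "nqn.2014-08.org.nvmexpress:uuid:$hostid"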
00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.226 19:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.792 00:13:32.792 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.792 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.792 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.051 { 00:13:33.051 "auth": { 00:13:33.051 "dhgroup": "ffdhe8192", 00:13:33.051 "digest": "sha256", 00:13:33.051 "state": "completed" 00:13:33.051 }, 00:13:33.051 "cntlid": 43, 00:13:33.051 "listen_address": { 00:13:33.051 "adrfam": "IPv4", 00:13:33.051 "traddr": "10.0.0.2", 00:13:33.051 "trsvcid": "4420", 00:13:33.051 "trtype": "TCP" 00:13:33.051 }, 00:13:33.051 "peer_address": { 00:13:33.051 "adrfam": "IPv4", 00:13:33.051 "traddr": "10.0.0.1", 00:13:33.051 "trsvcid": "33158", 00:13:33.051 "trtype": "TCP" 00:13:33.051 }, 00:13:33.051 "qid": 0, 00:13:33.051 "state": "enabled", 00:13:33.051 "thread": "nvmf_tgt_poll_group_000" 00:13:33.051 } 00:13:33.051 ]' 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:33.051 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.309 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.309 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.309 19:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.309 19:41:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:34.242 19:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.242 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.176 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:35.176 { 00:13:35.176 "auth": { 00:13:35.176 "dhgroup": "ffdhe8192", 00:13:35.176 "digest": "sha256", 00:13:35.176 "state": "completed" 00:13:35.176 }, 00:13:35.176 "cntlid": 45, 00:13:35.176 "listen_address": { 00:13:35.176 "adrfam": "IPv4", 00:13:35.176 "traddr": "10.0.0.2", 00:13:35.176 "trsvcid": "4420", 00:13:35.176 "trtype": "TCP" 00:13:35.176 }, 00:13:35.176 "peer_address": { 00:13:35.176 "adrfam": "IPv4", 00:13:35.176 "traddr": "10.0.0.1", 00:13:35.176 "trsvcid": "33172", 00:13:35.176 "trtype": "TCP" 00:13:35.176 }, 00:13:35.176 "qid": 0, 00:13:35.176 "state": "enabled", 00:13:35.176 "thread": "nvmf_tgt_poll_group_000" 00:13:35.176 } 00:13:35.176 ]' 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:35.176 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.434 19:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.434 19:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:35.434 19:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:35.434 19:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.434 19:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.434 19:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.692 19:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:13:36.257 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.514 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:36.514 19:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.514 19:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.514 19:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.514 19:42:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.514 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:36.514 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.772 19:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.340 00:13:37.340 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.340 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.340 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:37.609 { 00:13:37.609 "auth": { 00:13:37.609 "dhgroup": "ffdhe8192", 00:13:37.609 "digest": "sha256", 00:13:37.609 "state": "completed" 00:13:37.609 }, 00:13:37.609 "cntlid": 47, 00:13:37.609 "listen_address": { 00:13:37.609 "adrfam": "IPv4", 00:13:37.609 "traddr": "10.0.0.2", 00:13:37.609 "trsvcid": "4420", 00:13:37.609 "trtype": "TCP" 00:13:37.609 }, 00:13:37.609 
"peer_address": { 00:13:37.609 "adrfam": "IPv4", 00:13:37.609 "traddr": "10.0.0.1", 00:13:37.609 "trsvcid": "33200", 00:13:37.609 "trtype": "TCP" 00:13:37.609 }, 00:13:37.609 "qid": 0, 00:13:37.609 "state": "enabled", 00:13:37.609 "thread": "nvmf_tgt_poll_group_000" 00:13:37.609 } 00:13:37.609 ]' 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.609 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.901 19:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.836 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.106 00:13:39.106 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.106 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.106 19:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.372 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.372 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.372 19:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.372 19:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.630 { 00:13:39.630 "auth": { 00:13:39.630 "dhgroup": "null", 00:13:39.630 "digest": "sha384", 00:13:39.630 "state": "completed" 00:13:39.630 }, 00:13:39.630 "cntlid": 49, 00:13:39.630 "listen_address": { 00:13:39.630 "adrfam": "IPv4", 00:13:39.630 "traddr": "10.0.0.2", 00:13:39.630 "trsvcid": "4420", 00:13:39.630 "trtype": "TCP" 00:13:39.630 }, 00:13:39.630 "peer_address": { 00:13:39.630 "adrfam": "IPv4", 00:13:39.630 "traddr": "10.0.0.1", 00:13:39.630 "trsvcid": "33242", 00:13:39.630 "trtype": "TCP" 00:13:39.630 }, 00:13:39.630 "qid": 0, 00:13:39.630 "state": "enabled", 00:13:39.630 "thread": "nvmf_tgt_poll_group_000" 00:13:39.630 } 00:13:39.630 ]' 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:39.630 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.888 19:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:13:40.455 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.455 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:40.455 19:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.455 19:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.455 19:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.455 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.455 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:40.455 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.714 19:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.972 19:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.972 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.972 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.231 00:13:41.231 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.231 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.231 19:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.490 { 00:13:41.490 "auth": { 00:13:41.490 "dhgroup": "null", 00:13:41.490 "digest": "sha384", 00:13:41.490 "state": "completed" 00:13:41.490 }, 00:13:41.490 "cntlid": 51, 00:13:41.490 "listen_address": { 00:13:41.490 "adrfam": "IPv4", 00:13:41.490 "traddr": "10.0.0.2", 00:13:41.490 "trsvcid": "4420", 00:13:41.490 "trtype": "TCP" 00:13:41.490 }, 00:13:41.490 "peer_address": { 00:13:41.490 "adrfam": "IPv4", 00:13:41.490 "traddr": "10.0.0.1", 00:13:41.490 "trsvcid": "33266", 00:13:41.490 "trtype": "TCP" 00:13:41.490 }, 00:13:41.490 "qid": 0, 00:13:41.490 "state": "enabled", 00:13:41.490 "thread": "nvmf_tgt_poll_group_000" 00:13:41.490 } 00:13:41.490 ]' 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:41.490 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.748 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.748 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.748 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.023 19:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:13:42.615 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.615 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:42.615 
19:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.615 19:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.615 19:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.615 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.615 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:42.615 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.873 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.131 00:13:43.131 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.131 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.131 19:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.388 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.388 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.388 19:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.388 19:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.388 19:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.388 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.388 { 00:13:43.388 
"auth": { 00:13:43.388 "dhgroup": "null", 00:13:43.388 "digest": "sha384", 00:13:43.388 "state": "completed" 00:13:43.388 }, 00:13:43.388 "cntlid": 53, 00:13:43.388 "listen_address": { 00:13:43.389 "adrfam": "IPv4", 00:13:43.389 "traddr": "10.0.0.2", 00:13:43.389 "trsvcid": "4420", 00:13:43.389 "trtype": "TCP" 00:13:43.389 }, 00:13:43.389 "peer_address": { 00:13:43.389 "adrfam": "IPv4", 00:13:43.389 "traddr": "10.0.0.1", 00:13:43.389 "trsvcid": "35202", 00:13:43.389 "trtype": "TCP" 00:13:43.389 }, 00:13:43.389 "qid": 0, 00:13:43.389 "state": "enabled", 00:13:43.389 "thread": "nvmf_tgt_poll_group_000" 00:13:43.389 } 00:13:43.389 ]' 00:13:43.389 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.646 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.646 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.646 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:43.646 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.646 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.646 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.646 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.904 19:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:44.840 19:42:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.840 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.408 00:13:45.408 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.408 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.408 19:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.666 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.666 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.666 19:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.666 19:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.666 19:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.666 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.666 { 00:13:45.666 "auth": { 00:13:45.666 "dhgroup": "null", 00:13:45.666 "digest": "sha384", 00:13:45.666 "state": "completed" 00:13:45.666 }, 00:13:45.666 "cntlid": 55, 00:13:45.666 "listen_address": { 00:13:45.667 "adrfam": "IPv4", 00:13:45.667 "traddr": "10.0.0.2", 00:13:45.667 "trsvcid": "4420", 00:13:45.667 "trtype": "TCP" 00:13:45.667 }, 00:13:45.667 "peer_address": { 00:13:45.667 "adrfam": "IPv4", 00:13:45.667 "traddr": "10.0.0.1", 00:13:45.667 "trsvcid": "35224", 00:13:45.667 "trtype": "TCP" 00:13:45.667 }, 00:13:45.667 "qid": 0, 00:13:45.667 "state": "enabled", 00:13:45.667 "thread": "nvmf_tgt_poll_group_000" 00:13:45.667 } 00:13:45.667 ]' 00:13:45.667 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.667 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.667 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.667 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:45.667 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.667 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:13:45.667 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.667 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.960 19:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.895 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.153 00:13:47.153 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.153 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.153 19:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.727 { 00:13:47.727 "auth": { 00:13:47.727 "dhgroup": "ffdhe2048", 00:13:47.727 "digest": "sha384", 00:13:47.727 "state": "completed" 00:13:47.727 }, 00:13:47.727 "cntlid": 57, 00:13:47.727 "listen_address": { 00:13:47.727 "adrfam": "IPv4", 00:13:47.727 "traddr": "10.0.0.2", 00:13:47.727 "trsvcid": "4420", 00:13:47.727 "trtype": "TCP" 00:13:47.727 }, 00:13:47.727 "peer_address": { 00:13:47.727 "adrfam": "IPv4", 00:13:47.727 "traddr": "10.0.0.1", 00:13:47.727 "trsvcid": "35256", 00:13:47.727 "trtype": "TCP" 00:13:47.727 }, 00:13:47.727 "qid": 0, 00:13:47.727 "state": "enabled", 00:13:47.727 "thread": "nvmf_tgt_poll_group_000" 00:13:47.727 } 00:13:47.727 ]' 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.727 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.988 19:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:13:48.554 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.554 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:48.554 19:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.554 19:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.554 19:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.554 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.554 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:48.554 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.812 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.377 00:13:49.377 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.377 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.377 19:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.634 { 00:13:49.634 "auth": { 00:13:49.634 "dhgroup": "ffdhe2048", 00:13:49.634 "digest": "sha384", 00:13:49.634 "state": "completed" 00:13:49.634 }, 00:13:49.634 "cntlid": 59, 00:13:49.634 "listen_address": { 00:13:49.634 "adrfam": "IPv4", 00:13:49.634 "traddr": "10.0.0.2", 00:13:49.634 "trsvcid": "4420", 00:13:49.634 "trtype": "TCP" 00:13:49.634 }, 00:13:49.634 "peer_address": { 00:13:49.634 "adrfam": "IPv4", 00:13:49.634 "traddr": "10.0.0.1", 00:13:49.634 "trsvcid": "35280", 00:13:49.634 "trtype": "TCP" 00:13:49.634 }, 00:13:49.634 "qid": 0, 00:13:49.634 "state": "enabled", 00:13:49.634 "thread": "nvmf_tgt_poll_group_000" 00:13:49.634 } 00:13:49.634 ]' 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:49.634 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.891 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.891 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.891 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.148 19:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:13:50.713 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.713 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:50.713 19:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.713 19:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.713 19:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.713 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.713 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:50.713 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
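After each attach the trace verifies that authentication actually completed: bdev_nvme_get_controllers on the host must list nvme0, and nvmf_subsystem_get_qpairs on the target must report a qpair whose auth object carries the expected digest, dhgroup and "state": "completed", checked with the jq filters at target/auth.sh@44-48. A hedged sketch of those checks; the jq expressions are the ones in the log, while running the target query against the default RPC socket is an assumption:

    # Verification step, reconstructed from auth.sh@44-48 in the trace above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host controller must have come up under the expected name.
    [[ $("$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] || exit 1

    # Target qpair must show the negotiated auth parameters for this pass.
    qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]] || exit 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]] || exit 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1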
00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.971 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.229 00:13:51.229 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.229 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.229 19:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.795 { 00:13:51.795 "auth": { 00:13:51.795 "dhgroup": "ffdhe2048", 00:13:51.795 "digest": "sha384", 00:13:51.795 "state": "completed" 00:13:51.795 }, 00:13:51.795 "cntlid": 61, 00:13:51.795 "listen_address": { 00:13:51.795 "adrfam": "IPv4", 00:13:51.795 "traddr": "10.0.0.2", 00:13:51.795 "trsvcid": "4420", 00:13:51.795 "trtype": "TCP" 00:13:51.795 }, 00:13:51.795 "peer_address": { 00:13:51.795 "adrfam": "IPv4", 00:13:51.795 "traddr": "10.0.0.1", 00:13:51.795 "trsvcid": "35316", 00:13:51.795 "trtype": "TCP" 00:13:51.795 }, 00:13:51.795 "qid": 0, 00:13:51.795 "state": "enabled", 00:13:51.795 "thread": "nvmf_tgt_poll_group_000" 00:13:51.795 } 00:13:51.795 ]' 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.795 
19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.795 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.053 19:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:13:52.987 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.988 19:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:53.555 00:13:53.555 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.555 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.555 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.814 { 00:13:53.814 "auth": { 00:13:53.814 "dhgroup": "ffdhe2048", 00:13:53.814 "digest": "sha384", 00:13:53.814 "state": "completed" 00:13:53.814 }, 00:13:53.814 "cntlid": 63, 00:13:53.814 "listen_address": { 00:13:53.814 "adrfam": "IPv4", 00:13:53.814 "traddr": "10.0.0.2", 00:13:53.814 "trsvcid": "4420", 00:13:53.814 "trtype": "TCP" 00:13:53.814 }, 00:13:53.814 "peer_address": { 00:13:53.814 "adrfam": "IPv4", 00:13:53.814 "traddr": "10.0.0.1", 00:13:53.814 "trsvcid": "57022", 00:13:53.814 "trtype": "TCP" 00:13:53.814 }, 00:13:53.814 "qid": 0, 00:13:53.814 "state": "enabled", 00:13:53.814 "thread": "nvmf_tgt_poll_group_000" 00:13:53.814 } 00:13:53.814 ]' 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.814 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.073 19:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.040 19:42:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.040 19:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.605 00:13:55.605 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.605 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.605 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.862 { 00:13:55.862 "auth": { 00:13:55.862 "dhgroup": "ffdhe3072", 00:13:55.862 "digest": "sha384", 00:13:55.862 "state": "completed" 00:13:55.862 }, 00:13:55.862 "cntlid": 65, 00:13:55.862 "listen_address": { 00:13:55.862 "adrfam": "IPv4", 00:13:55.862 "traddr": "10.0.0.2", 00:13:55.862 "trsvcid": "4420", 00:13:55.862 "trtype": "TCP" 00:13:55.862 }, 00:13:55.862 "peer_address": { 00:13:55.862 "adrfam": "IPv4", 00:13:55.862 "traddr": "10.0.0.1", 00:13:55.862 "trsvcid": "57052", 00:13:55.862 "trtype": "TCP" 00:13:55.862 }, 00:13:55.862 "qid": 0, 00:13:55.862 "state": "enabled", 00:13:55.862 "thread": "nvmf_tgt_poll_group_000" 00:13:55.862 } 00:13:55.862 ]' 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.862 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.863 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.120 19:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:57.054 19:42:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.054 19:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.620 00:13:57.620 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.620 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.620 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.879 { 00:13:57.879 "auth": { 00:13:57.879 "dhgroup": "ffdhe3072", 00:13:57.879 "digest": "sha384", 00:13:57.879 "state": "completed" 00:13:57.879 }, 00:13:57.879 "cntlid": 67, 00:13:57.879 "listen_address": { 00:13:57.879 "adrfam": "IPv4", 00:13:57.879 "traddr": "10.0.0.2", 00:13:57.879 "trsvcid": "4420", 00:13:57.879 "trtype": "TCP" 00:13:57.879 }, 00:13:57.879 "peer_address": { 00:13:57.879 "adrfam": "IPv4", 00:13:57.879 "traddr": "10.0.0.1", 00:13:57.879 "trsvcid": "57082", 00:13:57.879 "trtype": "TCP" 00:13:57.879 }, 00:13:57.879 "qid": 0, 00:13:57.879 "state": "enabled", 00:13:57.879 "thread": "nvmf_tgt_poll_group_000" 00:13:57.879 } 00:13:57.879 ]' 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.879 
19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.879 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.137 19:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:13:59.071 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.071 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:13:59.071 19:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.071 19:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.071 19:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.071 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.071 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:59.071 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
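Each iteration also exercises the kernel initiator: nvme connect is issued with the DH-HMAC-CHAP secrets passed inline in DHHC-1 form (--dhchap-secret for the host key, --dhchap-ctrl-secret for the bidirectional controller key), then the session is torn down with nvme disconnect and the host entry is removed from the subsystem so the next digest/dhgroup/key combination starts clean. A sketch of that leg with placeholder secrets; the real DHHC-1 strings are the ones shown in the trace:

    # Kernel nvme-cli leg, as at target/auth.sh@52-56 above. The secret values
    # below are placeholders, not the keys used in the log.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb
    hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb

    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:00:<host-secret-placeholder>' \
        --dhchap-ctrl-secret 'DHHC-1:00:<ctrl-secret-placeholder>'

    nvme disconnect -n "$subnqn"

    # Target side: drop the host again before the next key/dhgroup pass.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"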
00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.328 19:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.586 00:13:59.586 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.586 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.586 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.843 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.844 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.844 19:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.844 19:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.844 19:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.844 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.844 { 00:13:59.844 "auth": { 00:13:59.844 "dhgroup": "ffdhe3072", 00:13:59.844 "digest": "sha384", 00:13:59.844 "state": "completed" 00:13:59.844 }, 00:13:59.844 "cntlid": 69, 00:13:59.844 "listen_address": { 00:13:59.844 "adrfam": "IPv4", 00:13:59.844 "traddr": "10.0.0.2", 00:13:59.844 "trsvcid": "4420", 00:13:59.844 "trtype": "TCP" 00:13:59.844 }, 00:13:59.844 "peer_address": { 00:13:59.844 "adrfam": "IPv4", 00:13:59.844 "traddr": "10.0.0.1", 00:13:59.844 "trsvcid": "57116", 00:13:59.844 "trtype": "TCP" 00:13:59.844 }, 00:13:59.844 "qid": 0, 00:13:59.844 "state": "enabled", 00:13:59.844 "thread": "nvmf_tgt_poll_group_000" 00:13:59.844 } 00:13:59.844 ]' 00:13:59.844 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.844 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.844 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.101 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:00.101 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.101 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.101 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.101 19:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.359 19:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret 
DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:14:01.293 19:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.293 19:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:01.293 19:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.293 19:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.293 19:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.293 19:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.293 19:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:01.293 19:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:01.293 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:01.860 00:14:01.860 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.860 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.860 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.118 19:42:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.118 { 00:14:02.118 "auth": { 00:14:02.118 "dhgroup": "ffdhe3072", 00:14:02.118 "digest": "sha384", 00:14:02.118 "state": "completed" 00:14:02.118 }, 00:14:02.118 "cntlid": 71, 00:14:02.118 "listen_address": { 00:14:02.118 "adrfam": "IPv4", 00:14:02.118 "traddr": "10.0.0.2", 00:14:02.118 "trsvcid": "4420", 00:14:02.118 "trtype": "TCP" 00:14:02.118 }, 00:14:02.118 "peer_address": { 00:14:02.118 "adrfam": "IPv4", 00:14:02.118 "traddr": "10.0.0.1", 00:14:02.118 "trsvcid": "55134", 00:14:02.118 "trtype": "TCP" 00:14:02.118 }, 00:14:02.118 "qid": 0, 00:14:02.118 "state": "enabled", 00:14:02.118 "thread": "nvmf_tgt_poll_group_000" 00:14:02.118 } 00:14:02.118 ]' 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.118 19:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.376 19:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.315 19:42:28 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.315 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.883 00:14:03.883 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.883 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.883 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.140 { 00:14:04.140 "auth": { 00:14:04.140 "dhgroup": "ffdhe4096", 00:14:04.140 "digest": "sha384", 00:14:04.140 "state": "completed" 00:14:04.140 }, 00:14:04.140 "cntlid": 73, 00:14:04.140 "listen_address": { 00:14:04.140 "adrfam": "IPv4", 00:14:04.140 "traddr": "10.0.0.2", 00:14:04.140 "trsvcid": "4420", 00:14:04.140 "trtype": "TCP" 00:14:04.140 }, 00:14:04.140 "peer_address": { 00:14:04.140 "adrfam": "IPv4", 00:14:04.140 "traddr": "10.0.0.1", 00:14:04.140 "trsvcid": "55168", 00:14:04.140 "trtype": "TCP" 00:14:04.140 }, 00:14:04.140 "qid": 0, 00:14:04.140 "state": "enabled", 
00:14:04.140 "thread": "nvmf_tgt_poll_group_000" 00:14:04.140 } 00:14:04.140 ]' 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.140 19:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.398 19:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:14:05.330 19:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.330 19:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:05.330 19:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.330 19:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.330 19:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.330 19:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.330 19:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:05.330 19:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.330 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.895 00:14:05.895 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.895 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.895 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.152 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.152 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.152 19:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.152 19:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.152 19:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.153 { 00:14:06.153 "auth": { 00:14:06.153 "dhgroup": "ffdhe4096", 00:14:06.153 "digest": "sha384", 00:14:06.153 "state": "completed" 00:14:06.153 }, 00:14:06.153 "cntlid": 75, 00:14:06.153 "listen_address": { 00:14:06.153 "adrfam": "IPv4", 00:14:06.153 "traddr": "10.0.0.2", 00:14:06.153 "trsvcid": "4420", 00:14:06.153 "trtype": "TCP" 00:14:06.153 }, 00:14:06.153 "peer_address": { 00:14:06.153 "adrfam": "IPv4", 00:14:06.153 "traddr": "10.0.0.1", 00:14:06.153 "trsvcid": "55206", 00:14:06.153 "trtype": "TCP" 00:14:06.153 }, 00:14:06.153 "qid": 0, 00:14:06.153 "state": "enabled", 00:14:06.153 "thread": "nvmf_tgt_poll_group_000" 00:14:06.153 } 00:14:06.153 ]' 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.153 19:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.409 19:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:14:07.342 19:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.342 19:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:07.342 19:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.342 19:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.342 19:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.342 19:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.342 19:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:07.342 19:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.342 19:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.600 19:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.600 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.600 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.857 00:14:07.857 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.857 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.857 
19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.115 { 00:14:08.115 "auth": { 00:14:08.115 "dhgroup": "ffdhe4096", 00:14:08.115 "digest": "sha384", 00:14:08.115 "state": "completed" 00:14:08.115 }, 00:14:08.115 "cntlid": 77, 00:14:08.115 "listen_address": { 00:14:08.115 "adrfam": "IPv4", 00:14:08.115 "traddr": "10.0.0.2", 00:14:08.115 "trsvcid": "4420", 00:14:08.115 "trtype": "TCP" 00:14:08.115 }, 00:14:08.115 "peer_address": { 00:14:08.115 "adrfam": "IPv4", 00:14:08.115 "traddr": "10.0.0.1", 00:14:08.115 "trsvcid": "55226", 00:14:08.115 "trtype": "TCP" 00:14:08.115 }, 00:14:08.115 "qid": 0, 00:14:08.115 "state": "enabled", 00:14:08.115 "thread": "nvmf_tgt_poll_group_000" 00:14:08.115 } 00:14:08.115 ]' 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.115 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:08.374 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:08.374 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:08.374 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.374 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.374 19:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.632 19:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:14:09.257 19:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.257 19:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:09.257 19:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.257 19:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.257 19:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.257 19:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:14:09.257 19:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:09.257 19:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.257 19:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.515 19:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.515 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:09.515 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:09.773 00:14:09.773 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.773 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.773 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.031 { 00:14:10.031 "auth": { 00:14:10.031 "dhgroup": "ffdhe4096", 00:14:10.031 "digest": "sha384", 00:14:10.031 "state": "completed" 00:14:10.031 }, 00:14:10.031 "cntlid": 79, 00:14:10.031 "listen_address": { 00:14:10.031 "adrfam": "IPv4", 00:14:10.031 "traddr": "10.0.0.2", 00:14:10.031 "trsvcid": "4420", 00:14:10.031 "trtype": "TCP" 00:14:10.031 }, 00:14:10.031 "peer_address": { 00:14:10.031 "adrfam": "IPv4", 00:14:10.031 
"traddr": "10.0.0.1", 00:14:10.031 "trsvcid": "55246", 00:14:10.031 "trtype": "TCP" 00:14:10.031 }, 00:14:10.031 "qid": 0, 00:14:10.031 "state": "enabled", 00:14:10.031 "thread": "nvmf_tgt_poll_group_000" 00:14:10.031 } 00:14:10.031 ]' 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:10.031 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.289 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.289 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.289 19:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.548 19:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:14:11.114 19:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.115 19:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:11.115 19:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.115 19:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.115 19:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.115 19:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.115 19:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.115 19:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:11.115 19:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.373 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.939 00:14:11.939 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.939 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.939 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.197 { 00:14:12.197 "auth": { 00:14:12.197 "dhgroup": "ffdhe6144", 00:14:12.197 "digest": "sha384", 00:14:12.197 "state": "completed" 00:14:12.197 }, 00:14:12.197 "cntlid": 81, 00:14:12.197 "listen_address": { 00:14:12.197 "adrfam": "IPv4", 00:14:12.197 "traddr": "10.0.0.2", 00:14:12.197 "trsvcid": "4420", 00:14:12.197 "trtype": "TCP" 00:14:12.197 }, 00:14:12.197 "peer_address": { 00:14:12.197 "adrfam": "IPv4", 00:14:12.197 "traddr": "10.0.0.1", 00:14:12.197 "trsvcid": "56796", 00:14:12.197 "trtype": "TCP" 00:14:12.197 }, 00:14:12.197 "qid": 0, 00:14:12.197 "state": "enabled", 00:14:12.197 "thread": "nvmf_tgt_poll_group_000" 00:14:12.197 } 00:14:12.197 ]' 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.197 19:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.762 19:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:14:13.326 19:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.326 19:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:13.327 19:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.327 19:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.327 19:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.327 19:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.327 19:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:13.327 19:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.584 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
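Each attach is then verified from both ends before the next key is tried, as the following trace lines show: the host must report a controller named nvme0, and the target's qpair listing must show the negotiated digest, DH group, and an auth state of "completed". A minimal version of that check (the jq filters are the ones the script uses, collapsed here into a single invocation; the expected values are the ones for this ffdhe6144/key1 pass):
# Host side: the authenticated controller should be present.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
# Target side: the qpair's auth block should match what was configured.
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'                  # expect: sha384, ffdhe6144, completed
# Host side: detach before moving on to the next key/DH-group combination.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0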
00:14:14.149 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.149 { 00:14:14.149 "auth": { 00:14:14.149 "dhgroup": "ffdhe6144", 00:14:14.149 "digest": "sha384", 00:14:14.149 "state": "completed" 00:14:14.149 }, 00:14:14.149 "cntlid": 83, 00:14:14.149 "listen_address": { 00:14:14.149 "adrfam": "IPv4", 00:14:14.149 "traddr": "10.0.0.2", 00:14:14.149 "trsvcid": "4420", 00:14:14.149 "trtype": "TCP" 00:14:14.149 }, 00:14:14.149 "peer_address": { 00:14:14.149 "adrfam": "IPv4", 00:14:14.149 "traddr": "10.0.0.1", 00:14:14.149 "trsvcid": "56812", 00:14:14.149 "trtype": "TCP" 00:14:14.149 }, 00:14:14.149 "qid": 0, 00:14:14.149 "state": "enabled", 00:14:14.149 "thread": "nvmf_tgt_poll_group_000" 00:14:14.149 } 00:14:14.149 ]' 00:14:14.149 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.406 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:14.406 19:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.406 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:14.406 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.406 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.406 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.406 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.662 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:14:15.226 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.226 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:15.226 19:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.226 19:42:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.226 19:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.226 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.226 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:15.226 19:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:15.484 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:15.484 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.484 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:15.484 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:15.484 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:15.484 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.484 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.485 19:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.485 19:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.485 19:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.485 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.485 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.053 00:14:16.053 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.053 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.053 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.311 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.311 19:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.311 19:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.311 19:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.311 19:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.311 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.311 { 00:14:16.311 "auth": { 00:14:16.311 "dhgroup": "ffdhe6144", 00:14:16.311 "digest": "sha384", 00:14:16.311 
"state": "completed" 00:14:16.311 }, 00:14:16.311 "cntlid": 85, 00:14:16.311 "listen_address": { 00:14:16.311 "adrfam": "IPv4", 00:14:16.311 "traddr": "10.0.0.2", 00:14:16.311 "trsvcid": "4420", 00:14:16.311 "trtype": "TCP" 00:14:16.311 }, 00:14:16.311 "peer_address": { 00:14:16.311 "adrfam": "IPv4", 00:14:16.311 "traddr": "10.0.0.1", 00:14:16.311 "trsvcid": "56838", 00:14:16.311 "trtype": "TCP" 00:14:16.311 }, 00:14:16.311 "qid": 0, 00:14:16.311 "state": "enabled", 00:14:16.311 "thread": "nvmf_tgt_poll_group_000" 00:14:16.311 } 00:14:16.311 ]' 00:14:16.311 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.311 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.311 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.569 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:16.569 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.569 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.569 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.569 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.827 19:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:14:17.394 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.394 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:17.394 19:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.394 19:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.394 19:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.394 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.394 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:17.394 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.651 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.217 00:14:18.217 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.217 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.217 19:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.475 { 00:14:18.475 "auth": { 00:14:18.475 "dhgroup": "ffdhe6144", 00:14:18.475 "digest": "sha384", 00:14:18.475 "state": "completed" 00:14:18.475 }, 00:14:18.475 "cntlid": 87, 00:14:18.475 "listen_address": { 00:14:18.475 "adrfam": "IPv4", 00:14:18.475 "traddr": "10.0.0.2", 00:14:18.475 "trsvcid": "4420", 00:14:18.475 "trtype": "TCP" 00:14:18.475 }, 00:14:18.475 "peer_address": { 00:14:18.475 "adrfam": "IPv4", 00:14:18.475 "traddr": "10.0.0.1", 00:14:18.475 "trsvcid": "56858", 00:14:18.475 "trtype": "TCP" 00:14:18.475 }, 00:14:18.475 "qid": 0, 00:14:18.475 "state": "enabled", 00:14:18.475 "thread": "nvmf_tgt_poll_group_000" 00:14:18.475 } 00:14:18.475 ]' 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:18.475 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.733 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.733 19:42:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.733 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.991 19:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:19.554 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.862 19:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.425 00:14:20.425 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.425 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.425 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.682 { 00:14:20.682 "auth": { 00:14:20.682 "dhgroup": "ffdhe8192", 00:14:20.682 "digest": "sha384", 00:14:20.682 "state": "completed" 00:14:20.682 }, 00:14:20.682 "cntlid": 89, 00:14:20.682 "listen_address": { 00:14:20.682 "adrfam": "IPv4", 00:14:20.682 "traddr": "10.0.0.2", 00:14:20.682 "trsvcid": "4420", 00:14:20.682 "trtype": "TCP" 00:14:20.682 }, 00:14:20.682 "peer_address": { 00:14:20.682 "adrfam": "IPv4", 00:14:20.682 "traddr": "10.0.0.1", 00:14:20.682 "trsvcid": "56870", 00:14:20.682 "trtype": "TCP" 00:14:20.682 }, 00:14:20.682 "qid": 0, 00:14:20.682 "state": "enabled", 00:14:20.682 "thread": "nvmf_tgt_poll_group_000" 00:14:20.682 } 00:14:20.682 ]' 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:20.682 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.940 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.940 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.940 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.198 19:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:14:21.762 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.762 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:21.762 19:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.762 19:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.762 19:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.762 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.762 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:21.763 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.020 19:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.584 00:14:22.584 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.584 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.584 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.842 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.842 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.842 19:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.842 19:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.842 19:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.842 
19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.842 { 00:14:22.842 "auth": { 00:14:22.842 "dhgroup": "ffdhe8192", 00:14:22.842 "digest": "sha384", 00:14:22.842 "state": "completed" 00:14:22.842 }, 00:14:22.842 "cntlid": 91, 00:14:22.842 "listen_address": { 00:14:22.842 "adrfam": "IPv4", 00:14:22.842 "traddr": "10.0.0.2", 00:14:22.842 "trsvcid": "4420", 00:14:22.842 "trtype": "TCP" 00:14:22.842 }, 00:14:22.842 "peer_address": { 00:14:22.842 "adrfam": "IPv4", 00:14:22.842 "traddr": "10.0.0.1", 00:14:22.842 "trsvcid": "39804", 00:14:22.842 "trtype": "TCP" 00:14:22.842 }, 00:14:22.842 "qid": 0, 00:14:22.842 "state": "enabled", 00:14:22.842 "thread": "nvmf_tgt_poll_group_000" 00:14:22.842 } 00:14:22.842 ]' 00:14:22.842 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.842 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.842 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.099 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:23.099 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.099 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.099 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.099 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.356 19:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:14:23.920 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.920 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:23.920 19:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.920 19:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.920 19:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.920 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.920 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.920 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.178 19:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.742 00:14:24.742 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.742 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.742 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.009 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.009 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.009 19:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.009 19:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.009 19:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.009 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.009 { 00:14:25.009 "auth": { 00:14:25.009 "dhgroup": "ffdhe8192", 00:14:25.009 "digest": "sha384", 00:14:25.009 "state": "completed" 00:14:25.009 }, 00:14:25.009 "cntlid": 93, 00:14:25.009 "listen_address": { 00:14:25.009 "adrfam": "IPv4", 00:14:25.009 "traddr": "10.0.0.2", 00:14:25.009 "trsvcid": "4420", 00:14:25.009 "trtype": "TCP" 00:14:25.009 }, 00:14:25.009 "peer_address": { 00:14:25.009 "adrfam": "IPv4", 00:14:25.009 "traddr": "10.0.0.1", 00:14:25.009 "trsvcid": "39832", 00:14:25.009 "trtype": "TCP" 00:14:25.009 }, 00:14:25.009 "qid": 0, 00:14:25.009 "state": "enabled", 00:14:25.009 "thread": "nvmf_tgt_poll_group_000" 00:14:25.009 } 00:14:25.009 ]' 00:14:25.009 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.267 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.267 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.267 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:25.267 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.267 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.267 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.267 19:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.525 19:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:14:26.091 19:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.091 19:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:26.091 19:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.091 19:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.091 19:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.091 19:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.091 19:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:26.091 19:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.656 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.656 19:42:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:27.222 00:14:27.222 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.222 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.222 19:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.480 { 00:14:27.480 "auth": { 00:14:27.480 "dhgroup": "ffdhe8192", 00:14:27.480 "digest": "sha384", 00:14:27.480 "state": "completed" 00:14:27.480 }, 00:14:27.480 "cntlid": 95, 00:14:27.480 "listen_address": { 00:14:27.480 "adrfam": "IPv4", 00:14:27.480 "traddr": "10.0.0.2", 00:14:27.480 "trsvcid": "4420", 00:14:27.480 "trtype": "TCP" 00:14:27.480 }, 00:14:27.480 "peer_address": { 00:14:27.480 "adrfam": "IPv4", 00:14:27.480 "traddr": "10.0.0.1", 00:14:27.480 "trsvcid": "39862", 00:14:27.480 "trtype": "TCP" 00:14:27.480 }, 00:14:27.480 "qid": 0, 00:14:27.480 "state": "enabled", 00:14:27.480 "thread": "nvmf_tgt_poll_group_000" 00:14:27.480 } 00:14:27.480 ]' 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.480 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.737 19:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.669 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.926 00:14:28.926 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.926 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.926 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.182 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.182 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.182 19:42:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.182 19:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.182 19:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.182 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.182 { 00:14:29.182 "auth": { 00:14:29.182 "dhgroup": "null", 00:14:29.182 "digest": "sha512", 00:14:29.182 "state": "completed" 00:14:29.182 }, 00:14:29.182 "cntlid": 97, 00:14:29.182 "listen_address": { 00:14:29.182 "adrfam": "IPv4", 00:14:29.182 "traddr": "10.0.0.2", 00:14:29.182 "trsvcid": "4420", 00:14:29.182 "trtype": "TCP" 00:14:29.182 }, 00:14:29.182 "peer_address": { 00:14:29.182 "adrfam": "IPv4", 00:14:29.182 "traddr": "10.0.0.1", 00:14:29.182 "trsvcid": "39874", 00:14:29.182 "trtype": "TCP" 00:14:29.182 }, 00:14:29.182 "qid": 0, 00:14:29.182 "state": "enabled", 00:14:29.182 "thread": "nvmf_tgt_poll_group_000" 00:14:29.182 } 00:14:29.182 ]' 00:14:29.182 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.439 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.439 19:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.439 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:29.439 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.439 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.439 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.439 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.697 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:14:30.259 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.259 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:30.259 19:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.259 19:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.259 19:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.259 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.259 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:30.259 19:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:30.543 19:42:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.543 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.802 00:14:30.802 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.802 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.802 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.060 { 00:14:31.060 "auth": { 00:14:31.060 "dhgroup": "null", 00:14:31.060 "digest": "sha512", 00:14:31.060 "state": "completed" 00:14:31.060 }, 00:14:31.060 "cntlid": 99, 00:14:31.060 "listen_address": { 00:14:31.060 "adrfam": "IPv4", 00:14:31.060 "traddr": "10.0.0.2", 00:14:31.060 "trsvcid": "4420", 00:14:31.060 "trtype": "TCP" 00:14:31.060 }, 00:14:31.060 "peer_address": { 00:14:31.060 "adrfam": "IPv4", 00:14:31.060 "traddr": "10.0.0.1", 00:14:31.060 "trsvcid": "39910", 00:14:31.060 "trtype": "TCP" 00:14:31.060 }, 00:14:31.060 "qid": 0, 00:14:31.060 "state": "enabled", 00:14:31.060 "thread": "nvmf_tgt_poll_group_000" 00:14:31.060 } 00:14:31.060 ]' 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.060 19:42:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:31.060 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.317 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.317 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.317 19:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.575 19:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:14:32.141 19:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.141 19:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:32.141 19:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.141 19:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.141 19:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.141 19:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.141 19:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:32.141 19:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.415 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.672 00:14:32.672 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.672 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.672 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.929 { 00:14:32.929 "auth": { 00:14:32.929 "dhgroup": "null", 00:14:32.929 "digest": "sha512", 00:14:32.929 "state": "completed" 00:14:32.929 }, 00:14:32.929 "cntlid": 101, 00:14:32.929 "listen_address": { 00:14:32.929 "adrfam": "IPv4", 00:14:32.929 "traddr": "10.0.0.2", 00:14:32.929 "trsvcid": "4420", 00:14:32.929 "trtype": "TCP" 00:14:32.929 }, 00:14:32.929 "peer_address": { 00:14:32.929 "adrfam": "IPv4", 00:14:32.929 "traddr": "10.0.0.1", 00:14:32.929 "trsvcid": "52786", 00:14:32.929 "trtype": "TCP" 00:14:32.929 }, 00:14:32.929 "qid": 0, 00:14:32.929 "state": "enabled", 00:14:32.929 "thread": "nvmf_tgt_poll_group_000" 00:14:32.929 } 00:14:32.929 ]' 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.929 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.187 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:33.187 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.187 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.187 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.187 19:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.445 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret 
DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:14:34.011 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.011 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:34.011 19:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.011 19:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.011 19:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.011 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.011 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:34.011 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.268 19:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.833 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.833 { 00:14:34.833 "auth": { 00:14:34.833 "dhgroup": "null", 00:14:34.833 "digest": "sha512", 00:14:34.833 "state": "completed" 00:14:34.833 }, 00:14:34.833 "cntlid": 103, 00:14:34.833 "listen_address": { 00:14:34.833 "adrfam": "IPv4", 00:14:34.833 "traddr": "10.0.0.2", 00:14:34.833 "trsvcid": "4420", 00:14:34.833 "trtype": "TCP" 00:14:34.833 }, 00:14:34.833 "peer_address": { 00:14:34.833 "adrfam": "IPv4", 00:14:34.833 "traddr": "10.0.0.1", 00:14:34.833 "trsvcid": "52808", 00:14:34.833 "trtype": "TCP" 00:14:34.833 }, 00:14:34.833 "qid": 0, 00:14:34.833 "state": "enabled", 00:14:34.833 "thread": "nvmf_tgt_poll_group_000" 00:14:34.833 } 00:14:34.833 ]' 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.833 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.091 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:35.091 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.091 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.091 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.091 19:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.349 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:35.917 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.175 19:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.740 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.740 { 00:14:36.740 "auth": { 00:14:36.740 "dhgroup": "ffdhe2048", 00:14:36.740 "digest": "sha512", 00:14:36.740 "state": "completed" 00:14:36.740 }, 00:14:36.740 "cntlid": 105, 00:14:36.740 "listen_address": { 00:14:36.740 "adrfam": "IPv4", 00:14:36.740 "traddr": "10.0.0.2", 00:14:36.740 "trsvcid": "4420", 00:14:36.740 "trtype": "TCP" 00:14:36.740 }, 00:14:36.740 "peer_address": { 00:14:36.740 "adrfam": "IPv4", 00:14:36.740 "traddr": "10.0.0.1", 00:14:36.740 "trsvcid": "52820", 00:14:36.740 "trtype": "TCP" 00:14:36.740 }, 00:14:36.740 "qid": 0, 00:14:36.740 "state": "enabled", 00:14:36.740 "thread": "nvmf_tgt_poll_group_000" 00:14:36.740 } 00:14:36.740 ]' 00:14:36.740 19:43:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.997 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.997 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.997 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.997 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.997 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.997 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.997 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.255 19:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:38.188 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.189 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.189 19:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.189 19:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:38.189 19:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.189 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.189 19:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.754 00:14:38.754 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.754 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.754 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.012 { 00:14:39.012 "auth": { 00:14:39.012 "dhgroup": "ffdhe2048", 00:14:39.012 "digest": "sha512", 00:14:39.012 "state": "completed" 00:14:39.012 }, 00:14:39.012 "cntlid": 107, 00:14:39.012 "listen_address": { 00:14:39.012 "adrfam": "IPv4", 00:14:39.012 "traddr": "10.0.0.2", 00:14:39.012 "trsvcid": "4420", 00:14:39.012 "trtype": "TCP" 00:14:39.012 }, 00:14:39.012 "peer_address": { 00:14:39.012 "adrfam": "IPv4", 00:14:39.012 "traddr": "10.0.0.1", 00:14:39.012 "trsvcid": "52850", 00:14:39.012 "trtype": "TCP" 00:14:39.012 }, 00:14:39.012 "qid": 0, 00:14:39.012 "state": "enabled", 00:14:39.012 "thread": "nvmf_tgt_poll_group_000" 00:14:39.012 } 00:14:39.012 ]' 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.012 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.270 19:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 
--hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:14:39.840 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.840 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:39.840 19:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.840 19:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.840 19:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.840 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.840 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.840 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.098 19:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.662 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.662 { 00:14:40.662 "auth": { 00:14:40.662 "dhgroup": "ffdhe2048", 00:14:40.662 "digest": "sha512", 00:14:40.662 "state": "completed" 00:14:40.662 }, 00:14:40.662 "cntlid": 109, 00:14:40.662 "listen_address": { 00:14:40.662 "adrfam": "IPv4", 00:14:40.662 "traddr": "10.0.0.2", 00:14:40.662 "trsvcid": "4420", 00:14:40.662 "trtype": "TCP" 00:14:40.662 }, 00:14:40.662 "peer_address": { 00:14:40.662 "adrfam": "IPv4", 00:14:40.662 "traddr": "10.0.0.1", 00:14:40.662 "trsvcid": "52882", 00:14:40.662 "trtype": "TCP" 00:14:40.662 }, 00:14:40.662 "qid": 0, 00:14:40.662 "state": "enabled", 00:14:40.662 "thread": "nvmf_tgt_poll_group_000" 00:14:40.662 } 00:14:40.662 ]' 00:14:40.662 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.918 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.918 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.919 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.919 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.919 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.919 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.919 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.175 19:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.104 19:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.105 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.105 19:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.668 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.668 { 00:14:42.668 "auth": { 00:14:42.668 "dhgroup": "ffdhe2048", 00:14:42.668 "digest": "sha512", 00:14:42.668 "state": "completed" 00:14:42.668 }, 00:14:42.668 "cntlid": 111, 00:14:42.668 "listen_address": { 00:14:42.668 "adrfam": "IPv4", 00:14:42.668 "traddr": "10.0.0.2", 00:14:42.668 "trsvcid": "4420", 00:14:42.668 "trtype": "TCP" 00:14:42.668 }, 00:14:42.668 "peer_address": { 00:14:42.668 "adrfam": "IPv4", 00:14:42.668 "traddr": "10.0.0.1", 00:14:42.668 "trsvcid": "44452", 00:14:42.668 "trtype": "TCP" 00:14:42.668 }, 00:14:42.668 "qid": 0, 
00:14:42.668 "state": "enabled", 00:14:42.668 "thread": "nvmf_tgt_poll_group_000" 00:14:42.668 } 00:14:42.668 ]' 00:14:42.668 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.924 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.924 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.924 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.924 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.924 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.924 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.924 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.180 19:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.742 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.000 19:43:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.000 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.258 00:14:44.258 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.258 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.258 19:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.515 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.515 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.515 19:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.515 19:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.515 19:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.515 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.515 { 00:14:44.515 "auth": { 00:14:44.515 "dhgroup": "ffdhe3072", 00:14:44.515 "digest": "sha512", 00:14:44.515 "state": "completed" 00:14:44.515 }, 00:14:44.515 "cntlid": 113, 00:14:44.515 "listen_address": { 00:14:44.515 "adrfam": "IPv4", 00:14:44.515 "traddr": "10.0.0.2", 00:14:44.515 "trsvcid": "4420", 00:14:44.515 "trtype": "TCP" 00:14:44.515 }, 00:14:44.515 "peer_address": { 00:14:44.515 "adrfam": "IPv4", 00:14:44.515 "traddr": "10.0.0.1", 00:14:44.515 "trsvcid": "44490", 00:14:44.515 "trtype": "TCP" 00:14:44.515 }, 00:14:44.515 "qid": 0, 00:14:44.515 "state": "enabled", 00:14:44.515 "thread": "nvmf_tgt_poll_group_000" 00:14:44.515 } 00:14:44.515 ]' 00:14:44.515 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.772 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.772 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.772 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.772 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.772 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.772 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.772 19:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.029 19:43:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:14:45.595 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.595 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:45.595 19:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.595 19:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.595 19:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.595 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.595 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.595 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.852 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.418 00:14:46.418 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.418 19:43:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.418 19:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.676 { 00:14:46.676 "auth": { 00:14:46.676 "dhgroup": "ffdhe3072", 00:14:46.676 "digest": "sha512", 00:14:46.676 "state": "completed" 00:14:46.676 }, 00:14:46.676 "cntlid": 115, 00:14:46.676 "listen_address": { 00:14:46.676 "adrfam": "IPv4", 00:14:46.676 "traddr": "10.0.0.2", 00:14:46.676 "trsvcid": "4420", 00:14:46.676 "trtype": "TCP" 00:14:46.676 }, 00:14:46.676 "peer_address": { 00:14:46.676 "adrfam": "IPv4", 00:14:46.676 "traddr": "10.0.0.1", 00:14:46.676 "trsvcid": "44532", 00:14:46.676 "trtype": "TCP" 00:14:46.676 }, 00:14:46.676 "qid": 0, 00:14:46.676 "state": "enabled", 00:14:46.676 "thread": "nvmf_tgt_poll_group_000" 00:14:46.676 } 00:14:46.676 ]' 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.676 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.933 19:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:14:47.500 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.500 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:47.500 19:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.500 19:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.500 19:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:47.500 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.500 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.500 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.756 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.319 00:14:48.319 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.319 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.319 19:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.319 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.595 { 00:14:48.595 "auth": { 00:14:48.595 "dhgroup": "ffdhe3072", 00:14:48.595 "digest": "sha512", 00:14:48.595 "state": "completed" 00:14:48.595 }, 00:14:48.595 "cntlid": 117, 00:14:48.595 "listen_address": { 00:14:48.595 "adrfam": "IPv4", 00:14:48.595 "traddr": 
"10.0.0.2", 00:14:48.595 "trsvcid": "4420", 00:14:48.595 "trtype": "TCP" 00:14:48.595 }, 00:14:48.595 "peer_address": { 00:14:48.595 "adrfam": "IPv4", 00:14:48.595 "traddr": "10.0.0.1", 00:14:48.595 "trsvcid": "44562", 00:14:48.595 "trtype": "TCP" 00:14:48.595 }, 00:14:48.595 "qid": 0, 00:14:48.595 "state": "enabled", 00:14:48.595 "thread": "nvmf_tgt_poll_group_000" 00:14:48.595 } 00:14:48.595 ]' 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.595 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.854 19:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:14:49.433 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.433 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:49.433 19:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.433 19:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.433 19:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.433 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.433 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.433 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.691 19:43:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.691 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.948 00:14:50.206 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.206 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.206 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.463 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.463 19:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.463 19:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.463 19:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.463 19:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.464 { 00:14:50.464 "auth": { 00:14:50.464 "dhgroup": "ffdhe3072", 00:14:50.464 "digest": "sha512", 00:14:50.464 "state": "completed" 00:14:50.464 }, 00:14:50.464 "cntlid": 119, 00:14:50.464 "listen_address": { 00:14:50.464 "adrfam": "IPv4", 00:14:50.464 "traddr": "10.0.0.2", 00:14:50.464 "trsvcid": "4420", 00:14:50.464 "trtype": "TCP" 00:14:50.464 }, 00:14:50.464 "peer_address": { 00:14:50.464 "adrfam": "IPv4", 00:14:50.464 "traddr": "10.0.0.1", 00:14:50.464 "trsvcid": "44592", 00:14:50.464 "trtype": "TCP" 00:14:50.464 }, 00:14:50.464 "qid": 0, 00:14:50.464 "state": "enabled", 00:14:50.464 "thread": "nvmf_tgt_poll_group_000" 00:14:50.464 } 00:14:50.464 ]' 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.464 19:43:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.721 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:14:51.287 19:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.287 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:51.287 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.287 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.287 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.287 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.287 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.287 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.287 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.544 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:52.109 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.109 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.109 { 00:14:52.109 "auth": { 00:14:52.109 "dhgroup": "ffdhe4096", 00:14:52.109 "digest": "sha512", 00:14:52.109 "state": "completed" 00:14:52.109 }, 00:14:52.109 "cntlid": 121, 00:14:52.109 "listen_address": { 00:14:52.109 "adrfam": "IPv4", 00:14:52.109 "traddr": "10.0.0.2", 00:14:52.109 "trsvcid": "4420", 00:14:52.109 "trtype": "TCP" 00:14:52.109 }, 00:14:52.109 "peer_address": { 00:14:52.109 "adrfam": "IPv4", 00:14:52.109 "traddr": "10.0.0.1", 00:14:52.109 "trsvcid": "33568", 00:14:52.109 "trtype": "TCP" 00:14:52.109 }, 00:14:52.109 "qid": 0, 00:14:52.109 "state": "enabled", 00:14:52.109 "thread": "nvmf_tgt_poll_group_000" 00:14:52.109 } 00:14:52.109 ]' 00:14:52.367 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.367 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.367 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.367 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:52.367 19:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.367 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.367 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.367 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.624 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:14:53.191 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.191 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:53.191 19:43:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.191 19:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.191 19:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.191 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.191 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.191 19:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.473 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.730 00:14:53.988 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.988 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.988 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.246 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.246 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.246 19:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.246 19:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.246 19:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.246 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.246 { 00:14:54.246 "auth": { 
00:14:54.246 "dhgroup": "ffdhe4096", 00:14:54.246 "digest": "sha512", 00:14:54.246 "state": "completed" 00:14:54.246 }, 00:14:54.246 "cntlid": 123, 00:14:54.246 "listen_address": { 00:14:54.246 "adrfam": "IPv4", 00:14:54.246 "traddr": "10.0.0.2", 00:14:54.246 "trsvcid": "4420", 00:14:54.246 "trtype": "TCP" 00:14:54.246 }, 00:14:54.246 "peer_address": { 00:14:54.246 "adrfam": "IPv4", 00:14:54.246 "traddr": "10.0.0.1", 00:14:54.246 "trsvcid": "33596", 00:14:54.246 "trtype": "TCP" 00:14:54.246 }, 00:14:54.246 "qid": 0, 00:14:54.246 "state": "enabled", 00:14:54.246 "thread": "nvmf_tgt_poll_group_000" 00:14:54.246 } 00:14:54.246 ]' 00:14:54.247 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.247 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.247 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.247 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:54.247 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.247 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.247 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.247 19:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.505 19:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:14:55.439 19:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.439 19:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:55.439 19:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.439 19:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.439 19:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.439 19:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.439 19:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:55.439 19:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 
00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.439 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.005 00:14:56.005 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.005 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.005 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.263 { 00:14:56.263 "auth": { 00:14:56.263 "dhgroup": "ffdhe4096", 00:14:56.263 "digest": "sha512", 00:14:56.263 "state": "completed" 00:14:56.263 }, 00:14:56.263 "cntlid": 125, 00:14:56.263 "listen_address": { 00:14:56.263 "adrfam": "IPv4", 00:14:56.263 "traddr": "10.0.0.2", 00:14:56.263 "trsvcid": "4420", 00:14:56.263 "trtype": "TCP" 00:14:56.263 }, 00:14:56.263 "peer_address": { 00:14:56.263 "adrfam": "IPv4", 00:14:56.263 "traddr": "10.0.0.1", 00:14:56.263 "trsvcid": "33624", 00:14:56.263 "trtype": "TCP" 00:14:56.263 }, 00:14:56.263 "qid": 0, 00:14:56.263 "state": "enabled", 00:14:56.263 "thread": "nvmf_tgt_poll_group_000" 00:14:56.263 } 00:14:56.263 ]' 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.263 19:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.521 19:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:14:57.456 19:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.456 19:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:57.456 19:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.456 19:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.456 19:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.456 19:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.456 19:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:57.456 19:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.456 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.021 00:14:58.021 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.021 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.021 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.279 { 00:14:58.279 "auth": { 00:14:58.279 "dhgroup": "ffdhe4096", 00:14:58.279 "digest": "sha512", 00:14:58.279 "state": "completed" 00:14:58.279 }, 00:14:58.279 "cntlid": 127, 00:14:58.279 "listen_address": { 00:14:58.279 "adrfam": "IPv4", 00:14:58.279 "traddr": "10.0.0.2", 00:14:58.279 "trsvcid": "4420", 00:14:58.279 "trtype": "TCP" 00:14:58.279 }, 00:14:58.279 "peer_address": { 00:14:58.279 "adrfam": "IPv4", 00:14:58.279 "traddr": "10.0.0.1", 00:14:58.279 "trsvcid": "33642", 00:14:58.279 "trtype": "TCP" 00:14:58.279 }, 00:14:58.279 "qid": 0, 00:14:58.279 "state": "enabled", 00:14:58.279 "thread": "nvmf_tgt_poll_group_000" 00:14:58.279 } 00:14:58.279 ]' 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.279 19:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.537 19:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:14:59.471 19:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.471 19:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.729 19:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.729 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.729 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.988 00:14:59.988 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.988 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.988 19:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
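Each round above closes by exercising the same credentials through the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets passed in-band as DHHC-1 blobs rather than by key name. In the key0, key1 and key2 rounds both --dhchap-secret and --dhchap-ctrl-secret are supplied (bidirectional authentication); the key3 rounds omit the controller secret, so only the host authenticates. A stripped-down version of that round-trip is sketched below; HOST_KEY and CTRL_KEY are placeholders for the DHHC-1:xx:... values that appear verbatim in the trace.

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb \
        --dhchap-secret "$HOST_KEY" \
        --dhchap-ctrl-secret "$CTRL_KEY"    # dropped in the key3 rounds (unidirectional)

    nvme disconnect -n "$SUBNQN"            # trace reports: disconnected 1 controller(s)

    # The host entry is then removed from the subsystem before the next key is tried:
    #   rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"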
00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.553 { 00:15:00.553 "auth": { 00:15:00.553 "dhgroup": "ffdhe6144", 00:15:00.553 "digest": "sha512", 00:15:00.553 "state": "completed" 00:15:00.553 }, 00:15:00.553 "cntlid": 129, 00:15:00.553 "listen_address": { 00:15:00.553 "adrfam": "IPv4", 00:15:00.553 "traddr": "10.0.0.2", 00:15:00.553 "trsvcid": "4420", 00:15:00.553 "trtype": "TCP" 00:15:00.553 }, 00:15:00.553 "peer_address": { 00:15:00.553 "adrfam": "IPv4", 00:15:00.553 "traddr": "10.0.0.1", 00:15:00.553 "trsvcid": "33658", 00:15:00.553 "trtype": "TCP" 00:15:00.553 }, 00:15:00.553 "qid": 0, 00:15:00.553 "state": "enabled", 00:15:00.553 "thread": "nvmf_tgt_poll_group_000" 00:15:00.553 } 00:15:00.553 ]' 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.553 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.811 19:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:15:01.379 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.379 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:01.379 19:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.379 19:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.379 19:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.379 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.379 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:01.379 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:15:01.637 19:43:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.637 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.209 00:15:02.209 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.209 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.209 19:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.467 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.467 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.467 19:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.467 19:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.467 19:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.467 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.467 { 00:15:02.467 "auth": { 00:15:02.467 "dhgroup": "ffdhe6144", 00:15:02.467 "digest": "sha512", 00:15:02.467 "state": "completed" 00:15:02.467 }, 00:15:02.467 "cntlid": 131, 00:15:02.467 "listen_address": { 00:15:02.467 "adrfam": "IPv4", 00:15:02.467 "traddr": "10.0.0.2", 00:15:02.467 "trsvcid": "4420", 00:15:02.467 "trtype": "TCP" 00:15:02.467 }, 00:15:02.467 "peer_address": { 00:15:02.467 "adrfam": "IPv4", 00:15:02.467 "traddr": "10.0.0.1", 00:15:02.467 "trsvcid": "58736", 00:15:02.467 "trtype": "TCP" 00:15:02.467 }, 00:15:02.467 "qid": 0, 00:15:02.467 "state": "enabled", 00:15:02.467 "thread": "nvmf_tgt_poll_group_000" 00:15:02.467 } 00:15:02.467 ]' 00:15:02.467 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.725 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.725 19:43:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.725 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:02.725 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.725 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.725 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.725 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.983 19:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.917 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.918 19:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.483 00:15:04.483 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.483 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.483 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.741 { 00:15:04.741 "auth": { 00:15:04.741 "dhgroup": "ffdhe6144", 00:15:04.741 "digest": "sha512", 00:15:04.741 "state": "completed" 00:15:04.741 }, 00:15:04.741 "cntlid": 133, 00:15:04.741 "listen_address": { 00:15:04.741 "adrfam": "IPv4", 00:15:04.741 "traddr": "10.0.0.2", 00:15:04.741 "trsvcid": "4420", 00:15:04.741 "trtype": "TCP" 00:15:04.741 }, 00:15:04.741 "peer_address": { 00:15:04.741 "adrfam": "IPv4", 00:15:04.741 "traddr": "10.0.0.1", 00:15:04.741 "trsvcid": "58764", 00:15:04.741 "trtype": "TCP" 00:15:04.741 }, 00:15:04.741 "qid": 0, 00:15:04.741 "state": "enabled", 00:15:04.741 "thread": "nvmf_tgt_poll_group_000" 00:15:04.741 } 00:15:04.741 ]' 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.741 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.999 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:04.999 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.999 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.999 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.999 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.257 19:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret 
DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:15:05.823 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.823 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:05.823 19:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.823 19:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.823 19:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.823 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.823 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:05.823 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.082 19:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.648 00:15:06.648 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.648 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.648 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.904 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.904 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:06.904 19:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.904 19:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.904 19:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.904 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.904 { 00:15:06.904 "auth": { 00:15:06.904 "dhgroup": "ffdhe6144", 00:15:06.904 "digest": "sha512", 00:15:06.904 "state": "completed" 00:15:06.904 }, 00:15:06.904 "cntlid": 135, 00:15:06.904 "listen_address": { 00:15:06.904 "adrfam": "IPv4", 00:15:06.904 "traddr": "10.0.0.2", 00:15:06.904 "trsvcid": "4420", 00:15:06.904 "trtype": "TCP" 00:15:06.904 }, 00:15:06.904 "peer_address": { 00:15:06.904 "adrfam": "IPv4", 00:15:06.904 "traddr": "10.0.0.1", 00:15:06.904 "trsvcid": "58796", 00:15:06.904 "trtype": "TCP" 00:15:06.904 }, 00:15:06.904 "qid": 0, 00:15:06.904 "state": "enabled", 00:15:06.904 "thread": "nvmf_tgt_poll_group_000" 00:15:06.904 } 00:15:06.904 ]' 00:15:06.904 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.905 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.905 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.905 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:06.905 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.162 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.162 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.162 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.420 19:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:07.984 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.242 19:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.807 00:15:08.807 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.807 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.807 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.064 { 00:15:09.064 "auth": { 00:15:09.064 "dhgroup": "ffdhe8192", 00:15:09.064 "digest": "sha512", 00:15:09.064 "state": "completed" 00:15:09.064 }, 00:15:09.064 "cntlid": 137, 00:15:09.064 "listen_address": { 00:15:09.064 "adrfam": "IPv4", 00:15:09.064 "traddr": "10.0.0.2", 00:15:09.064 "trsvcid": "4420", 00:15:09.064 "trtype": "TCP" 00:15:09.064 }, 00:15:09.064 "peer_address": { 00:15:09.064 "adrfam": "IPv4", 00:15:09.064 "traddr": "10.0.0.1", 00:15:09.064 "trsvcid": "58830", 00:15:09.064 "trtype": "TCP" 00:15:09.064 }, 00:15:09.064 "qid": 0, 00:15:09.064 "state": "enabled", 00:15:09.064 "thread": "nvmf_tgt_poll_group_000" 00:15:09.064 } 
00:15:09.064 ]' 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.064 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.322 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:09.322 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.322 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.322 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.322 19:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.580 19:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:15:10.146 19:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.146 19:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:10.146 19:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.146 19:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.146 19:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.146 19:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.146 19:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.146 19:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.476 19:43:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.476 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.040 00:15:11.040 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.040 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.040 19:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.299 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.299 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.299 19:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.299 19:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.299 19:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.299 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.299 { 00:15:11.299 "auth": { 00:15:11.299 "dhgroup": "ffdhe8192", 00:15:11.299 "digest": "sha512", 00:15:11.299 "state": "completed" 00:15:11.299 }, 00:15:11.299 "cntlid": 139, 00:15:11.299 "listen_address": { 00:15:11.299 "adrfam": "IPv4", 00:15:11.299 "traddr": "10.0.0.2", 00:15:11.299 "trsvcid": "4420", 00:15:11.299 "trtype": "TCP" 00:15:11.299 }, 00:15:11.299 "peer_address": { 00:15:11.299 "adrfam": "IPv4", 00:15:11.299 "traddr": "10.0.0.1", 00:15:11.299 "trsvcid": "58854", 00:15:11.299 "trtype": "TCP" 00:15:11.299 }, 00:15:11.299 "qid": 0, 00:15:11.299 "state": "enabled", 00:15:11.299 "thread": "nvmf_tgt_poll_group_000" 00:15:11.299 } 00:15:11.299 ]' 00:15:11.299 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.558 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.558 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.558 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:11.558 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.558 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.558 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.558 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.817 19:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:01:ODNlYWE2NTNhYzA1NzRhN2UxMTI1MDFiZjFlZTE1YmawxenR: --dhchap-ctrl-secret DHHC-1:02:YTUyY2E3OTBjMjc3NThkYWVkNmY0NjdmZjQ2MWM1ZjI2MmY5ZWFkY2ZkM2IzNWUx/t7z0A==: 00:15:12.395 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.395 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:12.395 19:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.395 19:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.396 19:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.396 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.396 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.396 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.654 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.222 00:15:13.222 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.222 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
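[Annotation] Each pass then verifies, from the target's point of view, that the connection really negotiated the requested parameters, and repeats the handshake once more with the kernel initiator using the DHHC-1 secrets themselves before tearing everything down for the next combination. A sketch of that verify-and-teardown loop for the sha512/ffdhe8192 rounds, assuming the qpairs JSON shape shown in the trace; the secrets are abbreviated here (the full values appear in the log), and the target-side calls are again shown as bare rpc.py as an assumption.

  # Ask the target which qpairs the subsystem has and check the negotiated auth block.
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Drop the SPDK host-side controller, then redo the handshake with the kernel initiator,
  # passing the DHHC-1 secrets directly on the command line.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb \
      --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb \
      --dhchap-secret DHHC-1:02:MDhk... --dhchap-ctrl-secret DHHC-1:01:NjIx...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Remove the host from the subsystem before the next key/dhgroup combination.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb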
00:15:13.222 19:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.481 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.481 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.481 19:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.481 19:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.481 19:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.481 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.481 { 00:15:13.481 "auth": { 00:15:13.481 "dhgroup": "ffdhe8192", 00:15:13.481 "digest": "sha512", 00:15:13.481 "state": "completed" 00:15:13.481 }, 00:15:13.481 "cntlid": 141, 00:15:13.481 "listen_address": { 00:15:13.481 "adrfam": "IPv4", 00:15:13.481 "traddr": "10.0.0.2", 00:15:13.482 "trsvcid": "4420", 00:15:13.482 "trtype": "TCP" 00:15:13.482 }, 00:15:13.482 "peer_address": { 00:15:13.482 "adrfam": "IPv4", 00:15:13.482 "traddr": "10.0.0.1", 00:15:13.482 "trsvcid": "56844", 00:15:13.482 "trtype": "TCP" 00:15:13.482 }, 00:15:13.482 "qid": 0, 00:15:13.482 "state": "enabled", 00:15:13.482 "thread": "nvmf_tgt_poll_group_000" 00:15:13.482 } 00:15:13.482 ]' 00:15:13.482 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.740 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.740 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.740 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:13.740 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.740 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.740 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.740 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.998 19:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:02:MDhkMGNlNGM3MTg5ZDJjZTQ2ZTMzMzY3YWY3NjJkNDNkOTI1MmVjNTg1YTg5ZGY0xqCK4Q==: --dhchap-ctrl-secret DHHC-1:01:NjIxZTQ3N2Q0ODYzOGFiMWNiYzFmMmQ0MmUzZDlhMmFwdZMb: 00:15:14.636 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.636 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:14.636 19:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.636 19:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.636 19:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.636 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.636 19:43:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:14.636 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.895 19:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.154 19:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.154 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.154 19:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.721 00:15:15.721 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.721 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.721 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.980 { 00:15:15.980 "auth": { 00:15:15.980 "dhgroup": "ffdhe8192", 00:15:15.980 "digest": "sha512", 00:15:15.980 "state": "completed" 00:15:15.980 }, 00:15:15.980 "cntlid": 143, 00:15:15.980 "listen_address": { 00:15:15.980 "adrfam": "IPv4", 00:15:15.980 "traddr": "10.0.0.2", 00:15:15.980 "trsvcid": "4420", 00:15:15.980 "trtype": "TCP" 00:15:15.980 }, 00:15:15.980 "peer_address": { 00:15:15.980 "adrfam": "IPv4", 00:15:15.980 "traddr": "10.0.0.1", 00:15:15.980 "trsvcid": 
"56856", 00:15:15.980 "trtype": "TCP" 00:15:15.980 }, 00:15:15.980 "qid": 0, 00:15:15.980 "state": "enabled", 00:15:15.980 "thread": "nvmf_tgt_poll_group_000" 00:15:15.980 } 00:15:15.980 ]' 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.980 19:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.546 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.114 19:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.372 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.938 00:15:18.220 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.220 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.220 19:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.492 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.492 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.492 19:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.492 19:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.492 19:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.492 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.492 { 00:15:18.492 "auth": { 00:15:18.492 "dhgroup": "ffdhe8192", 00:15:18.492 "digest": "sha512", 00:15:18.492 "state": "completed" 00:15:18.492 }, 00:15:18.492 "cntlid": 145, 00:15:18.492 "listen_address": { 00:15:18.492 "adrfam": "IPv4", 00:15:18.492 "traddr": "10.0.0.2", 00:15:18.492 "trsvcid": "4420", 00:15:18.492 "trtype": "TCP" 00:15:18.493 }, 00:15:18.493 "peer_address": { 00:15:18.493 "adrfam": "IPv4", 00:15:18.493 "traddr": "10.0.0.1", 00:15:18.493 "trsvcid": "56878", 00:15:18.493 "trtype": "TCP" 00:15:18.493 }, 00:15:18.493 "qid": 0, 00:15:18.493 "state": "enabled", 00:15:18.493 "thread": "nvmf_tgt_poll_group_000" 00:15:18.493 } 00:15:18.493 ]' 00:15:18.493 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.493 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.493 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.493 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.493 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.493 19:43:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.493 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.493 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.751 19:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret DHHC-1:00:ODc5M2U1MTYxOWNkYzdkNzdjM2Q2OTk0MTNiMmEzZjk2NWJkZWQ4N2NjYWZjYTQwT87LHQ==: --dhchap-ctrl-secret DHHC-1:03:Y2I3NTE5NjBiMzE1YjUzYTk2MzIyZGI1ZjBiNDY2M2Y0NGM0NDhiZmNmODgwYWNiOTZlMDVlZTQ3ZDgzNmMxMEXx09I=: 00:15:19.316 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:19.575 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:20.143 2024/07/15 19:43:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:20.143 request: 00:15:20.143 { 00:15:20.143 "method": "bdev_nvme_attach_controller", 00:15:20.143 "params": { 00:15:20.143 "name": "nvme0", 00:15:20.143 "trtype": "tcp", 00:15:20.143 "traddr": "10.0.0.2", 00:15:20.143 "adrfam": "ipv4", 00:15:20.143 "trsvcid": "4420", 00:15:20.143 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb", 00:15:20.143 "prchk_reftag": false, 00:15:20.143 "prchk_guard": false, 00:15:20.143 "hdgst": false, 00:15:20.143 "ddgst": false, 00:15:20.143 "dhchap_key": "key2" 00:15:20.143 } 00:15:20.143 } 00:15:20.143 Got JSON-RPC error response 00:15:20.143 GoRPCClient: error on JSON-RPC call 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey2 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.143 19:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.709 2024/07/15 19:43:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:20.709 request: 00:15:20.709 { 00:15:20.709 "method": "bdev_nvme_attach_controller", 00:15:20.709 "params": { 00:15:20.709 "name": "nvme0", 00:15:20.709 "trtype": "tcp", 00:15:20.709 "traddr": "10.0.0.2", 00:15:20.709 "adrfam": "ipv4", 00:15:20.709 "trsvcid": "4420", 00:15:20.709 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb", 00:15:20.709 "prchk_reftag": false, 00:15:20.710 "prchk_guard": false, 00:15:20.710 "hdgst": false, 00:15:20.710 "ddgst": false, 00:15:20.710 "dhchap_key": "key1", 00:15:20.710 "dhchap_ctrlr_key": "ckey2" 00:15:20.710 } 00:15:20.710 } 00:15:20.710 Got JSON-RPC error response 00:15:20.710 GoRPCClient: error on JSON-RPC call 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key1 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.710 19:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.277 2024/07/15 19:43:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:21.277 request: 00:15:21.277 { 00:15:21.277 "method": "bdev_nvme_attach_controller", 00:15:21.277 "params": { 00:15:21.277 "name": "nvme0", 00:15:21.277 "trtype": "tcp", 00:15:21.277 "traddr": "10.0.0.2", 00:15:21.277 "adrfam": "ipv4", 00:15:21.277 "trsvcid": "4420", 00:15:21.277 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:21.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb", 00:15:21.277 "prchk_reftag": false, 00:15:21.277 "prchk_guard": false, 00:15:21.277 "hdgst": false, 00:15:21.277 "ddgst": false, 00:15:21.277 "dhchap_key": "key1", 00:15:21.277 "dhchap_ctrlr_key": "ckey1" 00:15:21.277 } 00:15:21.277 } 00:15:21.277 Got JSON-RPC error response 00:15:21.277 GoRPCClient: error on JSON-RPC call 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:21.277 19:43:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 78059 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 78059 ']' 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78059 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78059 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78059' 00:15:21.277 killing process with pid 78059 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78059 00:15:21.277 19:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78059 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82914 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82914 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82914 ']' 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
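Both NOT-wrapped attach attempts above are expected to fail: the controller key the host offers (ckey2, then ckey1 after the host is re-registered with only --dhchap-key key1) does not line up with the target-side DH-HMAC-CHAP registration for this host NQN, so bdev_nvme_attach_controller returns Code=-5 (Input/output error). A condensed, hand-runnable sketch of the host-side call being exercised follows; every value is copied from the log above, and the paths assume the same /home/vagrant/spdk_repo checkout with the host bdev service listening on /var/tmp/host.sock.

# Expected to fail with Code=-5 while the target's DH-HMAC-CHAP registration
# for this host NQN does not match the offered key pair.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb \
    -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2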
00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.535 19:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82914 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82914 ']' 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.470 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.038 19:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.605 00:15:23.605 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.605 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.605 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.173 { 00:15:24.173 "auth": { 00:15:24.173 "dhgroup": "ffdhe8192", 00:15:24.173 "digest": "sha512", 00:15:24.173 "state": "completed" 00:15:24.173 }, 00:15:24.173 "cntlid": 1, 00:15:24.173 "listen_address": { 00:15:24.173 "adrfam": "IPv4", 00:15:24.173 "traddr": "10.0.0.2", 00:15:24.173 "trsvcid": "4420", 00:15:24.173 "trtype": "TCP" 00:15:24.173 }, 00:15:24.173 "peer_address": { 00:15:24.173 "adrfam": "IPv4", 00:15:24.173 "traddr": "10.0.0.1", 00:15:24.173 "trsvcid": "56546", 00:15:24.173 "trtype": "TCP" 00:15:24.173 }, 00:15:24.173 "qid": 0, 00:15:24.173 "state": "enabled", 00:15:24.173 "thread": "nvmf_tgt_poll_group_000" 00:15:24.173 } 00:15:24.173 ]' 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.173 19:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.432 19:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-secret 
DHHC-1:03:NmJiOTNiNzZmY2I3OTNiODQ3NzU3Y2YzYmJkN2VjZjcyYTdkNWZhYjA0YzAwNzE3N2MyYjhkMDYyYTZjM2VlM1JRYdk=: 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --dhchap-key key3 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:25.366 19:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.366 19:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.934 2024/07/15 19:43:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:25.934 request: 00:15:25.934 { 00:15:25.934 "method": "bdev_nvme_attach_controller", 00:15:25.934 "params": { 00:15:25.934 "name": "nvme0", 00:15:25.934 "trtype": "tcp", 00:15:25.934 "traddr": "10.0.0.2", 00:15:25.934 "adrfam": "ipv4", 00:15:25.934 "trsvcid": "4420", 00:15:25.934 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:25.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb", 00:15:25.934 "prchk_reftag": false, 00:15:25.934 "prchk_guard": false, 00:15:25.934 "hdgst": false, 00:15:25.934 "ddgst": false, 00:15:25.934 "dhchap_key": "key3" 00:15:25.934 } 00:15:25.934 } 00:15:25.934 Got JSON-RPC error response 00:15:25.934 GoRPCClient: error on JSON-RPC call 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.934 19:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.502 2024/07/15 19:43:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:26.502 request: 00:15:26.502 { 00:15:26.502 "method": "bdev_nvme_attach_controller", 00:15:26.502 "params": { 00:15:26.502 "name": "nvme0", 00:15:26.502 "trtype": "tcp", 00:15:26.502 "traddr": "10.0.0.2", 00:15:26.502 "adrfam": "ipv4", 00:15:26.502 "trsvcid": "4420", 00:15:26.502 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:26.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb", 00:15:26.502 "prchk_reftag": false, 00:15:26.502 "prchk_guard": false, 00:15:26.502 "hdgst": false, 00:15:26.502 "ddgst": false, 00:15:26.502 "dhchap_key": "key3" 00:15:26.502 } 00:15:26.502 } 00:15:26.502 Got JSON-RPC error response 00:15:26.502 GoRPCClient: error on JSON-RPC call 00:15:26.502 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:26.502 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.502 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.502 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.502 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.503 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:26.761 2024/07/15 19:43:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:26.761 request: 00:15:26.761 { 00:15:26.761 "method": "bdev_nvme_attach_controller", 00:15:26.761 "params": { 00:15:26.761 "name": "nvme0", 00:15:26.761 "trtype": "tcp", 00:15:26.761 "traddr": "10.0.0.2", 00:15:26.761 "adrfam": "ipv4", 00:15:26.761 "trsvcid": "4420", 00:15:26.761 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:26.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb", 00:15:26.761 "prchk_reftag": false, 00:15:26.761 "prchk_guard": false, 00:15:26.761 "hdgst": false, 00:15:26.761 "ddgst": false, 00:15:26.761 "dhchap_key": "key0", 00:15:26.761 "dhchap_ctrlr_key": "key1" 00:15:26.761 } 00:15:26.761 } 00:15:26.761 Got JSON-RPC error response 00:15:26.761 GoRPCClient: error on JSON-RPC call 00:15:27.020 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:27.020 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.020 19:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.020 19:43:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.020 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:27.020 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:27.278 00:15:27.278 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:15:27.278 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:15:27.278 19:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.536 19:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.536 19:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.536 19:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.793 19:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:15:27.793 19:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:15:27.793 19:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 78103 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 78103 ']' 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78103 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78103 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78103' 00:15:27.794 killing process with pid 78103 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78103 00:15:27.794 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78103 00:15:28.051 19:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:28.051 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.051 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:15:28.051 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.051 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:15:28.052 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.052 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.310 rmmod nvme_tcp 00:15:28.310 rmmod nvme_fabrics 00:15:28.310 rmmod nvme_keyring 
00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82914 ']' 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82914 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82914 ']' 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82914 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82914 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:28.310 killing process with pid 82914 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82914' 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82914 00:15:28.310 19:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82914 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.l8U /tmp/spdk.key-sha256.TG4 /tmp/spdk.key-sha384.Gk0 /tmp/spdk.key-sha512.ONB /tmp/spdk.key-sha512.Fk5 /tmp/spdk.key-sha384.VJA /tmp/spdk.key-sha256.dso '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:28.568 00:15:28.568 real 2m48.860s 00:15:28.568 user 6m48.828s 00:15:28.568 sys 0m21.979s 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.568 ************************************ 00:15:28.568 END TEST nvmf_auth_target 00:15:28.568 ************************************ 00:15:28.568 19:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.568 19:43:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.568 19:43:54 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:15:28.568 19:43:54 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:28.568 19:43:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:28.568 19:43:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.568 19:43:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.568 ************************************ 00:15:28.568 START TEST nvmf_bdevio_no_huge 00:15:28.568 ************************************ 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:28.568 * Looking for test storage... 00:15:28.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.568 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:28.828 Cannot find device "nvmf_tgt_br" 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.828 Cannot find device "nvmf_tgt_br2" 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:28.828 19:43:54 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:28.828 Cannot find device "nvmf_tgt_br" 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:28.828 Cannot find device "nvmf_tgt_br2" 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.828 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.829 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:28.829 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:28.829 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:28.829 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:28.829 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:28.829 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:29.087 19:43:54 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:29.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:29.087 00:15:29.087 --- 10.0.0.2 ping statistics --- 00:15:29.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.087 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:29.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:29.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:15:29.087 00:15:29.087 --- 10.0.0.3 ping statistics --- 00:15:29.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.087 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:29.087 00:15:29.087 --- 10.0.0.1 ping statistics --- 00:15:29.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.087 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83337 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83337 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83337 ']' 00:15:29.087 19:43:54 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:29.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.087 19:43:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:29.087 [2024-07-15 19:43:54.822519] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:15:29.087 [2024-07-15 19:43:54.822629] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:29.346 [2024-07-15 19:43:54.999589] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.913 [2024-07-15 19:43:55.440500] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.913 [2024-07-15 19:43:55.440647] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.913 [2024-07-15 19:43:55.440679] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.913 [2024-07-15 19:43:55.441664] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.913 [2024-07-15 19:43:55.441679] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
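The nvmf_veth_init steps above build the virtual test topology for this run: a veth pair per target interface, with the target ends (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24) moved into the nvmf_tgt_ns_spdk namespace, the initiator end (nvmf_init_if at 10.0.0.1/24) left in the root namespace, everything joined by the nvmf_br bridge, TCP port 4420 opened in iptables, and reachability confirmed by the three pings. A condensed sketch of those commands, copied from the log (the second target interface and the error-tolerant cleanup probes are omitted; the full logic lives in test/nvmf/common.sh):

# Build the namespace, veth pair and bridge the TCP target listens behind.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target address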
00:15:29.913 [2024-07-15 19:43:55.442233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:29.913 [2024-07-15 19:43:55.442580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:29.913 [2024-07-15 19:43:55.442714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:29.913 [2024-07-15 19:43:55.443233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 [2024-07-15 19:43:55.820981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 Malloc0 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:30.171 [2024-07-15 19:43:55.882308] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:30.171 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:30.172 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:30.172 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:30.172 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:30.172 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:30.172 { 00:15:30.172 "params": { 00:15:30.172 "name": "Nvme$subsystem", 00:15:30.172 "trtype": "$TEST_TRANSPORT", 00:15:30.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.172 "adrfam": "ipv4", 00:15:30.172 "trsvcid": "$NVMF_PORT", 00:15:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.172 "hdgst": ${hdgst:-false}, 00:15:30.172 "ddgst": ${ddgst:-false} 00:15:30.172 }, 00:15:30.172 "method": "bdev_nvme_attach_controller" 00:15:30.172 } 00:15:30.172 EOF 00:15:30.172 )") 00:15:30.172 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:30.172 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:15:30.172 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:30.172 19:43:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:30.172 "params": { 00:15:30.172 "name": "Nvme1", 00:15:30.172 "trtype": "tcp", 00:15:30.172 "traddr": "10.0.0.2", 00:15:30.172 "adrfam": "ipv4", 00:15:30.172 "trsvcid": "4420", 00:15:30.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.172 "hdgst": false, 00:15:30.172 "ddgst": false 00:15:30.172 }, 00:15:30.172 "method": "bdev_nvme_attach_controller" 00:15:30.172 }' 00:15:30.172 [2024-07-15 19:43:55.942586] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
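The target side of this bdevio run was assembled by the rpc_cmd calls just above (target/bdevio.sh@18 through @22): a TCP transport, a 64 MiB / 512-byte-block malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.2:4420. rpc_cmd is the harness helper that forwards to scripts/rpc.py; a condensed sketch of the equivalent direct calls follows, with all names and flags copied from the log, and the default target RPC socket /var/tmp/spdk.sock (the address waitforlisten polls above) assumed when reproducing this outside the harness.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Transport options exactly as chosen above: '-t tcp -o' from nvmf/common.sh, plus -u 8192 from bdevio.sh.
$rpc nvmf_create_transport -t tcp -o -u 8192
# Backing bdev: 64 MiB, 512-byte blocks, published as the subsystem's namespace.
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420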
00:15:30.172 [2024-07-15 19:43:55.942678] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83391 ] 00:15:30.447 [2024-07-15 19:43:56.086238] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.705 [2024-07-15 19:43:56.247673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.705 [2024-07-15 19:43:56.247813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.705 [2024-07-15 19:43:56.247819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.705 I/O targets: 00:15:30.705 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:30.705 00:15:30.705 00:15:30.705 CUnit - A unit testing framework for C - Version 2.1-3 00:15:30.705 http://cunit.sourceforge.net/ 00:15:30.705 00:15:30.705 00:15:30.705 Suite: bdevio tests on: Nvme1n1 00:15:30.963 Test: blockdev write read block ...passed 00:15:30.963 Test: blockdev write zeroes read block ...passed 00:15:30.963 Test: blockdev write zeroes read no split ...passed 00:15:30.963 Test: blockdev write zeroes read split ...passed 00:15:30.963 Test: blockdev write zeroes read split partial ...passed 00:15:30.963 Test: blockdev reset ...[2024-07-15 19:43:56.592061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:30.963 [2024-07-15 19:43:56.592192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1015600 (9): Bad file descriptor 00:15:30.963 [2024-07-15 19:43:56.607050] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:30.963 passed 00:15:30.963 Test: blockdev write read 8 blocks ...passed 00:15:30.963 Test: blockdev write read size > 128k ...passed 00:15:30.963 Test: blockdev write read invalid size ...passed 00:15:30.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.963 Test: blockdev write read max offset ...passed 00:15:30.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.963 Test: blockdev writev readv 8 blocks ...passed 00:15:30.963 Test: blockdev writev readv 30 x 1block ...passed 00:15:31.221 Test: blockdev writev readv block ...passed 00:15:31.221 Test: blockdev writev readv size > 128k ...passed 00:15:31.221 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:31.221 Test: blockdev comparev and writev ...[2024-07-15 19:43:56.777539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.221 [2024-07-15 19:43:56.777592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:31.221 [2024-07-15 19:43:56.777614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.221 [2024-07-15 19:43:56.777626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:31.221 [2024-07-15 19:43:56.778293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.222 [2024-07-15 19:43:56.778322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:31.222 [2024-07-15 19:43:56.778341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.222 [2024-07-15 19:43:56.778353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:31.222 [2024-07-15 19:43:56.778938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.222 [2024-07-15 19:43:56.778967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:31.222 [2024-07-15 19:43:56.778986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.222 [2024-07-15 19:43:56.778997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:31.222 [2024-07-15 19:43:56.779361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.222 [2024-07-15 19:43:56.779385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:31.222 [2024-07-15 19:43:56.779404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:31.222 [2024-07-15 19:43:56.779414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:31.222 passed 00:15:31.222 Test: blockdev nvme passthru rw ...passed 00:15:31.222 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:43:56.861498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:31.222 [2024-07-15 19:43:56.861534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:31.222 [2024-07-15 19:43:56.861653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:31.222 [2024-07-15 19:43:56.861677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:31.222 [2024-07-15 19:43:56.861816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:31.222 [2024-07-15 19:43:56.861832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:31.222 [2024-07-15 19:43:56.861940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:31.222 [2024-07-15 19:43:56.861955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:31.222 passed 00:15:31.222 Test: blockdev nvme admin passthru ...passed 00:15:31.222 Test: blockdev copy ...passed 00:15:31.222 00:15:31.222 Run Summary: Type Total Ran Passed Failed Inactive 00:15:31.222 suites 1 1 n/a 0 0 00:15:31.222 tests 23 23 23 0 0 00:15:31.222 asserts 152 152 152 0 n/a 00:15:31.222 00:15:31.222 Elapsed time = 0.934 seconds 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.787 rmmod nvme_tcp 00:15:31.787 rmmod nvme_fabrics 00:15:31.787 rmmod nvme_keyring 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83337 ']' 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 83337 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83337 ']' 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83337 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83337 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:31.787 killing process with pid 83337 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83337' 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83337 00:15:31.787 19:43:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83337 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:32.425 ************************************ 00:15:32.425 END TEST nvmf_bdevio_no_huge 00:15:32.425 ************************************ 00:15:32.425 00:15:32.425 real 0m3.895s 00:15:32.425 user 0m10.559s 00:15:32.425 sys 0m1.813s 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.425 19:43:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.425 19:43:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:32.425 19:43:58 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:32.425 19:43:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:32.425 19:43:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.425 19:43:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.695 ************************************ 00:15:32.695 START TEST nvmf_tls 00:15:32.695 ************************************ 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:32.695 * Looking for test storage... 
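
Note: the teardown traced above is the killprocess helper followed by nvmftestfini: the target pid is probed with kill -0, its comm name is read back with ps so that a reactor process (reactor_3 here) and never a sudo wrapper receives the signal, and the script waits for it before unloading nvme-tcp, nvme-fabrics and nvme-keyring and flushing the test interfaces. A minimal reconstruction of the pid handling, with everything the trace does not show (FreeBSD branch, privilege handling, error paths) left out or assumed:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0          # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        # only signal the real reactor process, never a sudo wrapper
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap it so the next test starts clean
}
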
00:15:32.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:32.695 Cannot find device "nvmf_tgt_br" 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.695 Cannot find device "nvmf_tgt_br2" 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:32.695 Cannot find device "nvmf_tgt_br" 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:15:32.695 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:32.696 Cannot find device "nvmf_tgt_br2" 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.696 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:32.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:32.954 00:15:32.954 --- 10.0.0.2 ping statistics --- 00:15:32.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.954 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:32.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:32.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:32.954 00:15:32.954 --- 10.0.0.3 ping statistics --- 00:15:32.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.954 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:32.954 00:15:32.954 --- 10.0.0.1 ping statistics --- 00:15:32.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.954 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83575 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83575 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83575 ']' 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.954 19:43:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.954 [2024-07-15 19:43:58.716967] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:15:32.954 [2024-07-15 19:43:58.717080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.213 [2024-07-15 19:43:58.855337] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.213 [2024-07-15 19:43:58.967453] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.213 [2024-07-15 19:43:58.967516] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
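
Note: with NET_TYPE=virt there is no physical NIC, so the nvmf_veth_init sequence traced above builds the fabric from veth pairs, one network namespace and a bridge: nvmf_init_if (10.0.0.1/24) stays in the root namespace for the initiator, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into nvmf_tgt_ns_spdk for the target, their host-side peers are enslaved to nvmf_br, TCP port 4420 is opened in iptables, and the three pings confirm reachability in both directions. Condensed from the commands shown in the trace, with error handling omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# target-facing ends live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# one bridge ties the three host-side peer interfaces together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check in both directions
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
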
00:15:33.213 [2024-07-15 19:43:58.967542] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.213 [2024-07-15 19:43:58.967550] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.213 [2024-07-15 19:43:58.967557] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.213 [2024-07-15 19:43:58.967580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.146 19:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.146 19:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:34.146 19:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.146 19:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.146 19:43:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.146 19:43:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.146 19:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:34.146 19:43:59 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:34.404 true 00:15:34.404 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:34.404 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:34.661 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:34.661 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:34.661 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:34.919 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:34.919 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:35.177 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:35.177 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:35.177 19:44:00 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:35.436 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:35.436 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:35.694 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:35.694 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:35.694 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:35.694 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:35.952 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:35.952 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:35.952 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:36.210 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:36.210 19:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
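
Note: because the target was started with --wait-for-rpc, the socket configuration traced above and just below all happens before framework_start_init: the ssl socket implementation is made the default, the TLS version is pinned (13, then 7 as the alternate case, then back to 13), and kTLS is toggled on and off, with each change read back through sock_impl_get_options and jq. The same sequence as plain RPC calls, rpc.py path shortened to the repo-relative one:

rpc=./scripts/rpc.py

# make the ssl implementation the default for new sockets, then pin the TLS version
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version    # prints 13 in the run above

# kTLS offload is off by default; the test flips it both ways and checks each time
$rpc sock_impl_set_options -i ssl --enable-ktls
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls    # true
$rpc sock_impl_set_options -i ssl --disable-ktls
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls    # false
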
00:15:36.468 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:36.468 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:36.468 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:36.726 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:36.726 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.PV0nJogWFm 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.2PCMcxUtKY 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.PV0nJogWFm 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2PCMcxUtKY 00:15:36.985 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:37.243 19:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:37.808 19:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.PV0nJogWFm 
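
Note: the two PSKs used by this test are produced above by format_interchange_psk: the raw hex string is wrapped into the NVMe TLS interchange form NVMeTLSkey-1:01:<base64>: (the 01 field is the digest argument), written to a mktemp file and locked down with chmod 0600 so the tools can be given a path rather than the key itself. A sketch of what the small python helper computes; that the base64 payload is the configured key followed by its little-endian CRC32 is an assumption which matches the values printed in the trace, the authoritative logic lives in nvmf/common.sh's format_key:

format_interchange_psk() {  # usage: format_interchange_psk <hex-key-string> <digest-id>
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity check appended to the key
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))' "$1" "$2"
}

key=$(format_interchange_psk 00112233445566778899aabbccddeeff 1)
key_path=$(mktemp)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"   # rpc.py and the perf tools only ever see this path (--psk / --psk-path)
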
00:15:37.808 19:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PV0nJogWFm 00:15:37.808 19:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:37.808 [2024-07-15 19:44:03.558502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.808 19:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:38.375 19:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:38.375 [2024-07-15 19:44:04.138634] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:38.375 [2024-07-15 19:44:04.138890] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.633 19:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:38.633 malloc0 00:15:38.633 19:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:38.891 19:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PV0nJogWFm 00:15:39.149 [2024-07-15 19:44:04.910625] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:39.407 19:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.PV0nJogWFm 00:15:49.402 Initializing NVMe Controllers 00:15:49.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:49.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:49.402 Initialization complete. Launching workers. 
00:15:49.402 ======================================================== 00:15:49.402 Latency(us) 00:15:49.402 Device Information : IOPS MiB/s Average min max 00:15:49.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9651.19 37.70 6632.80 1718.98 14606.91 00:15:49.402 ======================================================== 00:15:49.402 Total : 9651.19 37.70 6632.80 1718.98 14606.91 00:15:49.402 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PV0nJogWFm 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PV0nJogWFm' 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83941 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83941 /var/tmp/bdevperf.sock 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83941 ']' 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:49.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.402 19:44:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.402 [2024-07-15 19:44:15.183298] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
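
Note: setup_nvmf_tgt, traced above, is the whole TLS-enabled target bring-up: a TCP transport, subsystem cnode1 backed by a 32 MiB malloc bdev with 4 KiB blocks, a listener opened with -k (the TLS listener flag in this run, which is what triggers the "TLS support is considered experimental" notice), and host1 registered together with its PSK. The first datapath check is then the 10 s spdk_nvme_perf run whose summary appears above, connecting with -S ssl and --psk-path and sustaining roughly 9.7k IOPS of 4 KiB randrw. The same sequence, condensed to repo-relative paths:

rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0                      # 32 MiB, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PV0nJogWFm

# initiator side: ssl socket implementation plus the same key, referenced by path
ip netns exec nvmf_tgt_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path /tmp/tmp.PV0nJogWFm
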
00:15:49.402 [2024-07-15 19:44:15.183581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83941 ] 00:15:49.660 [2024-07-15 19:44:15.319207] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.918 [2024-07-15 19:44:15.451185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.483 19:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.483 19:44:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:50.483 19:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PV0nJogWFm 00:15:50.742 [2024-07-15 19:44:16.396364] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:50.742 [2024-07-15 19:44:16.396524] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:50.742 TLSTESTn1 00:15:50.742 19:44:16 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:51.000 Running I/O for 10 seconds... 00:16:00.994 00:16:00.994 Latency(us) 00:16:00.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.994 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:00.994 Verification LBA range: start 0x0 length 0x2000 00:16:00.994 TLSTESTn1 : 10.02 4063.34 15.87 0.00 0.00 31440.99 6911.07 27763.43 00:16:00.994 =================================================================================================================== 00:16:00.994 Total : 4063.34 15.87 0.00 0.00 31440.99 6911.07 27763.43 00:16:00.994 0 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83941 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83941 ']' 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83941 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83941 00:16:00.994 killing process with pid 83941 00:16:00.994 Received shutdown signal, test time was about 10.000000 seconds 00:16:00.994 00:16:00.994 Latency(us) 00:16:00.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.994 =================================================================================================================== 00:16:00.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
83941' 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83941 00:16:00.994 [2024-07-15 19:44:26.674312] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:00.994 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83941 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2PCMcxUtKY 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2PCMcxUtKY 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2PCMcxUtKY 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2PCMcxUtKY' 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84108 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84108 /var/tmp/bdevperf.sock 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84108 ']' 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.253 19:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.253 [2024-07-15 19:44:26.974587] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
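
Note: run_bdevperf, traced above, repeats the datapath check through the bdev layer: bdevperf is started idle (-z) with its own RPC socket, a TLSTEST controller is attached over that socket with the same PSK, and perform_tests drives a 10 s verify workload (about 4.1k IOPS in the summary above) before the process is killed. A condensed version of the same flow; the polling loop stands in for the waitforlisten helper seen in the trace:

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# wait until bdevperf answers on its RPC socket before sending it work
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods &> /dev/null; do sleep 0.2; done

./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PV0nJogWFm

./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

kill "$bdevperf_pid" && wait "$bdevperf_pid"
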
00:16:01.253 [2024-07-15 19:44:26.974686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84108 ] 00:16:01.512 [2024-07-15 19:44:27.114330] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.512 [2024-07-15 19:44:27.235137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.494 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.494 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:02.494 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2PCMcxUtKY 00:16:02.753 [2024-07-15 19:44:28.284098] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:02.753 [2024-07-15 19:44:28.284308] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:02.753 [2024-07-15 19:44:28.294300] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:02.753 [2024-07-15 19:44:28.295023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d6e50 (107): Transport endpoint is not connected 00:16:02.753 [2024-07-15 19:44:28.296007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d6e50 (9): Bad file descriptor 00:16:02.753 [2024-07-15 19:44:28.297005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:02.753 [2024-07-15 19:44:28.297050] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:02.753 [2024-07-15 19:44:28.297082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:02.753 2024/07/15 19:44:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.2PCMcxUtKY subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:02.753 request: 00:16:02.753 { 00:16:02.753 "method": "bdev_nvme_attach_controller", 00:16:02.753 "params": { 00:16:02.753 "name": "TLSTEST", 00:16:02.753 "trtype": "tcp", 00:16:02.753 "traddr": "10.0.0.2", 00:16:02.753 "adrfam": "ipv4", 00:16:02.753 "trsvcid": "4420", 00:16:02.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:02.753 "prchk_reftag": false, 00:16:02.753 "prchk_guard": false, 00:16:02.753 "hdgst": false, 00:16:02.753 "ddgst": false, 00:16:02.753 "psk": "/tmp/tmp.2PCMcxUtKY" 00:16:02.753 } 00:16:02.753 } 00:16:02.753 Got JSON-RPC error response 00:16:02.753 GoRPCClient: error on JSON-RPC call 00:16:02.753 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84108 00:16:02.753 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84108 ']' 00:16:02.753 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84108 00:16:02.753 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:02.753 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.753 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84108 00:16:02.753 killing process with pid 84108 00:16:02.753 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.753 00:16:02.753 Latency(us) 00:16:02.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.753 =================================================================================================================== 00:16:02.754 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:02.754 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:02.754 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:02.754 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84108' 00:16:02.754 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84108 00:16:02.754 [2024-07-15 19:44:28.348110] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:02.754 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84108 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PV0nJogWFm 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PV0nJogWFm 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PV0nJogWFm 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PV0nJogWFm' 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84148 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84148 /var/tmp/bdevperf.sock 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84148 ']' 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:03.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.013 19:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.013 [2024-07-15 19:44:28.649400] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
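
Note: from here on the cases are negative tests. Attaching TLSTEST with the wrong key (/tmp/tmp.2PCMcxUtKY) above makes the TCP-level reads fail and bdev_nvme_attach_controller returns Code=-5 Input/output error, which is what the NOT wrapper in the trace expects: it runs run_bdevperf, captures the exit status into es, and only succeeds if the wrapped command failed. A compact sketch of that pattern; the valid_exec_arg check and signal classification in the real helper are simplified here:

NOT() {
    local es=0
    "$@" || es=$?
    # exit codes above 128 mean the command died on a signal, which is still a real failure
    if (( es > 128 )); then
        return "$es"
    fi
    (( es != 0 ))    # success only when the wrapped command returned non-zero
}

# expected to fail: correct host, correct subsystem, wrong PSK
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2PCMcxUtKY
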
00:16:03.013 [2024-07-15 19:44:28.649519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84148 ] 00:16:03.013 [2024-07-15 19:44:28.793069] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.271 [2024-07-15 19:44:28.912716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.PV0nJogWFm 00:16:04.208 [2024-07-15 19:44:29.889909] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:04.208 [2024-07-15 19:44:29.890050] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:04.208 [2024-07-15 19:44:29.894983] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:04.208 [2024-07-15 19:44:29.895038] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:04.208 [2024-07-15 19:44:29.895105] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:04.208 [2024-07-15 19:44:29.895738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1543e50 (107): Transport endpoint is not connected 00:16:04.208 [2024-07-15 19:44:29.896718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1543e50 (9): Bad file descriptor 00:16:04.208 [2024-07-15 19:44:29.897714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:04.208 [2024-07-15 19:44:29.897778] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:04.208 [2024-07-15 19:44:29.897793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:04.208 2024/07/15 19:44:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.PV0nJogWFm subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:04.208 request: 00:16:04.208 { 00:16:04.208 "method": "bdev_nvme_attach_controller", 00:16:04.208 "params": { 00:16:04.208 "name": "TLSTEST", 00:16:04.208 "trtype": "tcp", 00:16:04.208 "traddr": "10.0.0.2", 00:16:04.208 "adrfam": "ipv4", 00:16:04.208 "trsvcid": "4420", 00:16:04.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:04.208 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:04.208 "prchk_reftag": false, 00:16:04.208 "prchk_guard": false, 00:16:04.208 "hdgst": false, 00:16:04.208 "ddgst": false, 00:16:04.208 "psk": "/tmp/tmp.PV0nJogWFm" 00:16:04.208 } 00:16:04.208 } 00:16:04.208 Got JSON-RPC error response 00:16:04.208 GoRPCClient: error on JSON-RPC call 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84148 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84148 ']' 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84148 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84148 00:16:04.208 killing process with pid 84148 00:16:04.208 Received shutdown signal, test time was about 10.000000 seconds 00:16:04.208 00:16:04.208 Latency(us) 00:16:04.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.208 =================================================================================================================== 00:16:04.208 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84148' 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84148 00:16:04.208 [2024-07-15 19:44:29.951359] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:04.208 19:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84148 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PV0nJogWFm 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PV0nJogWFm 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.466 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PV0nJogWFm 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PV0nJogWFm' 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84198 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84198 /var/tmp/bdevperf.sock 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84198 ']' 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.467 19:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.725 [2024-07-15 19:44:30.252906] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:16:04.725 [2024-07-15 19:44:30.253039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84198 ] 00:16:04.725 [2024-07-15 19:44:30.394593] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.984 [2024-07-15 19:44:30.509821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.550 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.550 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:05.550 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PV0nJogWFm 00:16:05.808 [2024-07-15 19:44:31.424891] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:05.808 [2024-07-15 19:44:31.425002] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:05.808 [2024-07-15 19:44:31.434587] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:05.808 [2024-07-15 19:44:31.434643] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:05.808 [2024-07-15 19:44:31.434710] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:05.808 [2024-07-15 19:44:31.435706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62e50 (107): Transport endpoint is not connected 00:16:05.808 [2024-07-15 19:44:31.436674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc62e50 (9): Bad file descriptor 00:16:05.808 [2024-07-15 19:44:31.437670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:05.808 [2024-07-15 19:44:31.437698] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:05.808 [2024-07-15 19:44:31.437732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
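Note: this failure is the mirror image of the first case. The identity the target looks up ("NVMe0R01 <hostnqn> <subnqn>", as printed in the error above) combines the connecting host NQN and the subsystem NQN, and a PSK only exists for the pair that was registered. This is the shape of the registration the setup traces later in this log use (shown here for context, not run at this point):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw
    # A connection from a different host NQN, or to a different subsystem NQN, has no
    # matching entry, which is exactly what the two failing cases above exercise.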
00:16:05.808 2024/07/15 19:44:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.PV0nJogWFm subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:05.808 request: 00:16:05.808 { 00:16:05.808 "method": "bdev_nvme_attach_controller", 00:16:05.808 "params": { 00:16:05.808 "name": "TLSTEST", 00:16:05.808 "trtype": "tcp", 00:16:05.808 "traddr": "10.0.0.2", 00:16:05.808 "adrfam": "ipv4", 00:16:05.808 "trsvcid": "4420", 00:16:05.808 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:05.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:05.808 "prchk_reftag": false, 00:16:05.808 "prchk_guard": false, 00:16:05.808 "hdgst": false, 00:16:05.808 "ddgst": false, 00:16:05.808 "psk": "/tmp/tmp.PV0nJogWFm" 00:16:05.808 } 00:16:05.808 } 00:16:05.808 Got JSON-RPC error response 00:16:05.808 GoRPCClient: error on JSON-RPC call 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84198 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84198 ']' 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84198 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84198 00:16:05.808 killing process with pid 84198 00:16:05.808 Received shutdown signal, test time was about 10.000000 seconds 00:16:05.808 00:16:05.808 Latency(us) 00:16:05.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.808 =================================================================================================================== 00:16:05.808 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84198' 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84198 00:16:05.808 [2024-07-15 19:44:31.483629] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:05.808 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84198 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:06.075 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84239 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84239 /var/tmp/bdevperf.sock 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84239 ']' 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.076 19:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.076 [2024-07-15 19:44:31.760488] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
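Note: the third negative case drops the PSK entirely. The listener in these tests is created with the TLS flag (-k on nvmf_subsystem_add_listener, as the setup traces later in this log show), so in this run a plain attach with no --psk never gets past the handshake and the controller ends up in a failed state, as the errors below show:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # Attaching to that listener without --psk is expected to fail:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1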
00:16:06.076 [2024-07-15 19:44:31.760599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84239 ] 00:16:06.348 [2024-07-15 19:44:31.892850] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.348 [2024-07-15 19:44:32.014586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.282 19:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.282 19:44:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:07.282 19:44:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:07.282 [2024-07-15 19:44:33.004901] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:07.282 [2024-07-15 19:44:33.006321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfe3e0 (9): Bad file descriptor 00:16:07.282 [2024-07-15 19:44:33.007316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:07.282 [2024-07-15 19:44:33.007345] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:07.282 [2024-07-15 19:44:33.007359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:07.282 2024/07/15 19:44:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:07.282 request: 00:16:07.282 { 00:16:07.282 "method": "bdev_nvme_attach_controller", 00:16:07.282 "params": { 00:16:07.282 "name": "TLSTEST", 00:16:07.282 "trtype": "tcp", 00:16:07.282 "traddr": "10.0.0.2", 00:16:07.282 "adrfam": "ipv4", 00:16:07.282 "trsvcid": "4420", 00:16:07.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.282 "prchk_reftag": false, 00:16:07.282 "prchk_guard": false, 00:16:07.282 "hdgst": false, 00:16:07.282 "ddgst": false 00:16:07.282 } 00:16:07.282 } 00:16:07.282 Got JSON-RPC error response 00:16:07.282 GoRPCClient: error on JSON-RPC call 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84239 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84239 ']' 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84239 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84239 00:16:07.282 killing process with pid 84239 00:16:07.282 Received shutdown signal, test time was about 10.000000 seconds 00:16:07.282 00:16:07.282 Latency(us) 00:16:07.282 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.282 =================================================================================================================== 00:16:07.282 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84239' 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84239 00:16:07.282 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84239 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83575 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83575 ']' 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83575 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.539 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83575 00:16:07.797 killing process with pid 83575 00:16:07.797 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:07.797 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:07.797 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83575' 00:16:07.797 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83575 00:16:07.797 [2024-07-15 19:44:33.328115] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:07.797 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83575 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.wviZjmEfTw 
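Note: format_interchange_psk above wraps the configured key into the TLS PSK interchange form: the NVMeTLSkey-1 prefix, a two-digit hash identifier (the trailing argument 2 becomes 02 here), and a base64 blob of the key material with a CRC-32 appended. A rough reconstruction of what the traced python step computes (assumptions: CRC-32 as computed by zlib, appended to the key bytes in little-endian order; check nvmf/common.sh before relying on this):

    key=00112233445566778899aabbccddeeff0011223344556677
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")' "$key"
    # If the assumptions hold, this prints the same NVMeTLSkey-1:02:MDAx...==: string
    # captured in key_long above.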
00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.wviZjmEfTw 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84300 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84300 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84300 ']' 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.055 19:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.055 [2024-07-15 19:44:33.731031] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:08.055 [2024-07-15 19:44:33.731185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.314 [2024-07-15 19:44:33.871811] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.314 [2024-07-15 19:44:33.996575] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.314 [2024-07-15 19:44:33.996665] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.314 [2024-07-15 19:44:33.996691] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.314 [2024-07-15 19:44:33.996699] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.314 [2024-07-15 19:44:33.996706] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
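Note: condensed, the setup_nvmf_tgt trace that follows amounts to six RPCs against the freshly started nvmf_tgt (rpc.py talks to the default /var/tmp/spdk.sock), ending with the PSK registration for the (cnode1, host1) pair:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw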
00:16:08.314 [2024-07-15 19:44:33.996739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.wviZjmEfTw 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wviZjmEfTw 00:16:09.250 19:44:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:09.508 [2024-07-15 19:44:35.038297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.508 19:44:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:09.766 19:44:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:10.024 [2024-07-15 19:44:35.610452] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:10.024 [2024-07-15 19:44:35.610687] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.024 19:44:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:10.281 malloc0 00:16:10.281 19:44:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:10.539 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw 00:16:10.796 [2024-07-15 19:44:36.331215] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wviZjmEfTw 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wviZjmEfTw' 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84404 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.796 19:44:36 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84404 /var/tmp/bdevperf.sock 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84404 ']' 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.796 19:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.796 [2024-07-15 19:44:36.402724] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:10.796 [2024-07-15 19:44:36.402836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84404 ] 00:16:10.796 [2024-07-15 19:44:36.536918] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.068 [2024-07-15 19:44:36.660148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.014 19:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.014 19:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:12.014 19:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw 00:16:12.271 [2024-07-15 19:44:37.826900] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:12.271 [2024-07-15 19:44:37.827052] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:12.271 TLSTESTn1 00:16:12.271 19:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:12.271 Running I/O for 10 seconds... 
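Note: with a matching PSK registered for (cnode1, host1), the attach above succeeds and bdevperf exposes the remote namespace as TLSTESTn1; the I/O phase is then driven over the same RPC socket with bdevperf.py perform_tests, as traced above. One way to confirm the bdev before kicking off I/O (bdev_get_bdevs is a standard SPDK RPC; this check is an addition, not something the test script runs):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b TLSTESTn1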
00:16:22.284 00:16:22.285 Latency(us) 00:16:22.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.285 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:22.285 Verification LBA range: start 0x0 length 0x2000 00:16:22.285 TLSTESTn1 : 10.02 3935.42 15.37 0.00 0.00 32462.64 7506.85 20852.36 00:16:22.285 =================================================================================================================== 00:16:22.285 Total : 3935.42 15.37 0.00 0.00 32462.64 7506.85 20852.36 00:16:22.285 0 00:16:22.285 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:22.285 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84404 00:16:22.285 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84404 ']' 00:16:22.285 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84404 00:16:22.285 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:22.285 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.285 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84404 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:22.608 killing process with pid 84404 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84404' 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84404 00:16:22.608 Received shutdown signal, test time was about 10.000000 seconds 00:16:22.608 00:16:22.608 Latency(us) 00:16:22.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.608 =================================================================================================================== 00:16:22.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:22.608 [2024-07-15 19:44:48.084976] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84404 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.wviZjmEfTw 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wviZjmEfTw 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wviZjmEfTw 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wviZjmEfTw 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:22.608 
19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:22.608 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wviZjmEfTw' 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84565 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84565 /var/tmp/bdevperf.sock 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84565 ']' 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.609 19:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.889 [2024-07-15 19:44:48.376770] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:22.889 [2024-07-15 19:44:48.376900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84565 ] 00:16:22.889 [2024-07-15 19:44:48.513287] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.889 [2024-07-15 19:44:48.626455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw 00:16:23.851 [2024-07-15 19:44:49.589342] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:23.851 [2024-07-15 19:44:49.589423] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:23.851 [2024-07-15 19:44:49.589434] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.wviZjmEfTw 00:16:23.851 2024/07/15 19:44:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.wviZjmEfTw subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:16:23.851 request: 00:16:23.851 { 00:16:23.851 "method": "bdev_nvme_attach_controller", 00:16:23.851 "params": { 00:16:23.851 "name": "TLSTEST", 00:16:23.851 "trtype": "tcp", 00:16:23.851 "traddr": "10.0.0.2", 00:16:23.851 "adrfam": "ipv4", 00:16:23.851 "trsvcid": "4420", 00:16:23.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.851 "prchk_reftag": false, 00:16:23.851 "prchk_guard": false, 00:16:23.851 "hdgst": false, 00:16:23.851 "ddgst": false, 00:16:23.851 "psk": "/tmp/tmp.wviZjmEfTw" 00:16:23.851 } 00:16:23.851 } 00:16:23.851 Got JSON-RPC error response 00:16:23.851 GoRPCClient: error on JSON-RPC call 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84565 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84565 ']' 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84565 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.851 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84565 00:16:24.139 killing process with pid 84565 00:16:24.140 Received shutdown signal, test time was about 10.000000 seconds 00:16:24.140 00:16:24.140 Latency(us) 00:16:24.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.140 =================================================================================================================== 00:16:24.140 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84565' 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84565 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84565 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84300 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84300 ']' 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84300 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84300 00:16:24.140 killing process with pid 84300 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84300' 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84300 00:16:24.140 [2024-07-15 19:44:49.872559] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:24.140 19:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84300 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84614 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84614 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84614 ']' 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.397 19:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.397 [2024-07-15 19:44:50.170074] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:24.397 [2024-07-15 19:44:50.170218] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.656 [2024-07-15 19:44:50.311427] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.656 [2024-07-15 19:44:50.414088] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.656 [2024-07-15 19:44:50.414185] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.656 [2024-07-15 19:44:50.414197] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.656 [2024-07-15 19:44:50.414205] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.656 [2024-07-15 19:44:50.414212] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
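Note: both ends enforce the same rule on the PSK file's mode. With the key relaxed to 0666 above, the initiator's bdev_nvme refuses to load it ("Incorrect permissions for PSK file" / "Could not load PSK"), and the target-side registration attempted below fails the same way. A quick check-and-fix with plain coreutils (not part of the test script, which simply restores the mode later):

    stat -c '%a %n' /tmp/tmp.wviZjmEfTw   # shows 666 at this point in the run
    chmod 0600 /tmp/tmp.wviZjmEfTw        # owner-only access is what the test restores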
00:16:24.656 [2024-07-15 19:44:50.414252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.wviZjmEfTw 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wviZjmEfTw 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.wviZjmEfTw 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wviZjmEfTw 00:16:25.592 19:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:25.850 [2024-07-15 19:44:51.453116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.850 19:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:26.108 19:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:26.108 [2024-07-15 19:44:51.885214] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:26.108 [2024-07-15 19:44:51.885451] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.367 19:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:26.367 malloc0 00:16:26.367 19:44:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:26.626 19:44:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw 00:16:26.885 [2024-07-15 19:44:52.561393] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:26.885 [2024-07-15 19:44:52.561438] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:26.885 [2024-07-15 19:44:52.561471] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:26.885 2024/07/15 19:44:52 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.wviZjmEfTw], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:26.885 request: 00:16:26.885 { 00:16:26.885 "method": "nvmf_subsystem_add_host", 00:16:26.885 "params": { 00:16:26.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.885 "host": "nqn.2016-06.io.spdk:host1", 00:16:26.885 "psk": "/tmp/tmp.wviZjmEfTw" 00:16:26.885 } 00:16:26.885 } 00:16:26.885 Got JSON-RPC error response 00:16:26.885 GoRPCClient: error on JSON-RPC call 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84614 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84614 ']' 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84614 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84614 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:26.885 killing process with pid 84614 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84614' 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84614 00:16:26.885 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84614 00:16:27.143 19:44:52 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.wviZjmEfTw 00:16:27.143 19:44:52 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:27.143 19:44:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84726 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84726 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84726 ']' 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
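Note: the NOT case above confirms the target applies the same permission check at registration time: nvmf_subsystem_add_host rejects the 0666 key with an internal error. Once the mode is restored to 0600 (the chmod traced just above), the identical RPC is replayed against a fresh target in the setup below and goes through with only the PSK-path deprecation warning:

    # Rejected while the key file is world-readable (Code=-32603 above):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw
    # Accepted after chmod 0600, as the setup trace below shows.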
00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.144 19:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.144 [2024-07-15 19:44:52.910458] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:27.144 [2024-07-15 19:44:52.910610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.402 [2024-07-15 19:44:53.040274] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.402 [2024-07-15 19:44:53.139437] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.402 [2024-07-15 19:44:53.139491] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.402 [2024-07-15 19:44:53.139517] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.402 [2024-07-15 19:44:53.139525] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.402 [2024-07-15 19:44:53.139532] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.402 [2024-07-15 19:44:53.139577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.wviZjmEfTw 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wviZjmEfTw 00:16:28.337 19:44:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:28.595 [2024-07-15 19:44:54.145393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.595 19:44:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:28.853 19:44:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:29.111 [2024-07-15 19:44:54.665491] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:29.111 [2024-07-15 19:44:54.665755] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.111 19:44:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:29.369 malloc0 00:16:29.369 19:44:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:29.626 19:44:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw 00:16:29.884 [2024-07-15 19:44:55.485727] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84829 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84829 /var/tmp/bdevperf.sock 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84829 ']' 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.884 19:44:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.884 [2024-07-15 19:44:55.578046] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:29.884 [2024-07-15 19:44:55.578256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84829 ] 00:16:30.141 [2024-07-15 19:44:55.726923] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.141 [2024-07-15 19:44:55.831625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.076 19:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.076 19:44:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:31.076 19:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw 00:16:31.076 [2024-07-15 19:44:56.783684] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:31.076 [2024-07-15 19:44:56.784293] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:31.334 TLSTESTn1 00:16:31.334 19:44:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:31.593 19:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:31.593 "subsystems": [ 00:16:31.593 { 00:16:31.593 "subsystem": "keyring", 00:16:31.593 "config": [] 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "subsystem": "iobuf", 00:16:31.593 "config": [ 00:16:31.593 { 00:16:31.593 "method": "iobuf_set_options", 00:16:31.593 "params": { 00:16:31.593 "large_bufsize": 
135168, 00:16:31.593 "large_pool_count": 1024, 00:16:31.593 "small_bufsize": 8192, 00:16:31.593 "small_pool_count": 8192 00:16:31.593 } 00:16:31.593 } 00:16:31.593 ] 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "subsystem": "sock", 00:16:31.593 "config": [ 00:16:31.593 { 00:16:31.593 "method": "sock_set_default_impl", 00:16:31.593 "params": { 00:16:31.593 "impl_name": "posix" 00:16:31.593 } 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "method": "sock_impl_set_options", 00:16:31.593 "params": { 00:16:31.593 "enable_ktls": false, 00:16:31.593 "enable_placement_id": 0, 00:16:31.593 "enable_quickack": false, 00:16:31.593 "enable_recv_pipe": true, 00:16:31.593 "enable_zerocopy_send_client": false, 00:16:31.593 "enable_zerocopy_send_server": true, 00:16:31.593 "impl_name": "ssl", 00:16:31.593 "recv_buf_size": 4096, 00:16:31.593 "send_buf_size": 4096, 00:16:31.593 "tls_version": 0, 00:16:31.593 "zerocopy_threshold": 0 00:16:31.593 } 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "method": "sock_impl_set_options", 00:16:31.593 "params": { 00:16:31.593 "enable_ktls": false, 00:16:31.593 "enable_placement_id": 0, 00:16:31.593 "enable_quickack": false, 00:16:31.593 "enable_recv_pipe": true, 00:16:31.593 "enable_zerocopy_send_client": false, 00:16:31.593 "enable_zerocopy_send_server": true, 00:16:31.593 "impl_name": "posix", 00:16:31.593 "recv_buf_size": 2097152, 00:16:31.593 "send_buf_size": 2097152, 00:16:31.593 "tls_version": 0, 00:16:31.593 "zerocopy_threshold": 0 00:16:31.593 } 00:16:31.593 } 00:16:31.593 ] 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "subsystem": "vmd", 00:16:31.593 "config": [] 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "subsystem": "accel", 00:16:31.593 "config": [ 00:16:31.593 { 00:16:31.593 "method": "accel_set_options", 00:16:31.593 "params": { 00:16:31.593 "buf_count": 2048, 00:16:31.593 "large_cache_size": 16, 00:16:31.593 "sequence_count": 2048, 00:16:31.593 "small_cache_size": 128, 00:16:31.593 "task_count": 2048 00:16:31.593 } 00:16:31.593 } 00:16:31.593 ] 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "subsystem": "bdev", 00:16:31.593 "config": [ 00:16:31.593 { 00:16:31.593 "method": "bdev_set_options", 00:16:31.593 "params": { 00:16:31.593 "bdev_auto_examine": true, 00:16:31.593 "bdev_io_cache_size": 256, 00:16:31.593 "bdev_io_pool_size": 65535, 00:16:31.593 "iobuf_large_cache_size": 16, 00:16:31.593 "iobuf_small_cache_size": 128 00:16:31.593 } 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "method": "bdev_raid_set_options", 00:16:31.593 "params": { 00:16:31.593 "process_window_size_kb": 1024 00:16:31.593 } 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "method": "bdev_iscsi_set_options", 00:16:31.593 "params": { 00:16:31.593 "timeout_sec": 30 00:16:31.593 } 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "method": "bdev_nvme_set_options", 00:16:31.593 "params": { 00:16:31.593 "action_on_timeout": "none", 00:16:31.593 "allow_accel_sequence": false, 00:16:31.593 "arbitration_burst": 0, 00:16:31.593 "bdev_retry_count": 3, 00:16:31.593 "ctrlr_loss_timeout_sec": 0, 00:16:31.593 "delay_cmd_submit": true, 00:16:31.593 "dhchap_dhgroups": [ 00:16:31.593 "null", 00:16:31.593 "ffdhe2048", 00:16:31.593 "ffdhe3072", 00:16:31.593 "ffdhe4096", 00:16:31.593 "ffdhe6144", 00:16:31.593 "ffdhe8192" 00:16:31.593 ], 00:16:31.593 "dhchap_digests": [ 00:16:31.593 "sha256", 00:16:31.593 "sha384", 00:16:31.593 "sha512" 00:16:31.593 ], 00:16:31.593 "disable_auto_failback": false, 00:16:31.593 "fast_io_fail_timeout_sec": 0, 00:16:31.593 "generate_uuids": false, 00:16:31.593 "high_priority_weight": 0, 
00:16:31.593 "io_path_stat": false, 00:16:31.593 "io_queue_requests": 0, 00:16:31.593 "keep_alive_timeout_ms": 10000, 00:16:31.593 "low_priority_weight": 0, 00:16:31.593 "medium_priority_weight": 0, 00:16:31.593 "nvme_adminq_poll_period_us": 10000, 00:16:31.593 "nvme_error_stat": false, 00:16:31.593 "nvme_ioq_poll_period_us": 0, 00:16:31.593 "rdma_cm_event_timeout_ms": 0, 00:16:31.593 "rdma_max_cq_size": 0, 00:16:31.593 "rdma_srq_size": 0, 00:16:31.593 "reconnect_delay_sec": 0, 00:16:31.593 "timeout_admin_us": 0, 00:16:31.593 "timeout_us": 0, 00:16:31.593 "transport_ack_timeout": 0, 00:16:31.593 "transport_retry_count": 4, 00:16:31.593 "transport_tos": 0 00:16:31.593 } 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "method": "bdev_nvme_set_hotplug", 00:16:31.593 "params": { 00:16:31.593 "enable": false, 00:16:31.593 "period_us": 100000 00:16:31.593 } 00:16:31.593 }, 00:16:31.593 { 00:16:31.593 "method": "bdev_malloc_create", 00:16:31.593 "params": { 00:16:31.593 "block_size": 4096, 00:16:31.593 "name": "malloc0", 00:16:31.593 "num_blocks": 8192, 00:16:31.593 "optimal_io_boundary": 0, 00:16:31.593 "physical_block_size": 4096, 00:16:31.593 "uuid": "8362e1c7-f722-40be-bf4b-3e13c8f38536" 00:16:31.593 } 00:16:31.593 }, 00:16:31.593 { 00:16:31.594 "method": "bdev_wait_for_examine" 00:16:31.594 } 00:16:31.594 ] 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "subsystem": "nbd", 00:16:31.594 "config": [] 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "subsystem": "scheduler", 00:16:31.594 "config": [ 00:16:31.594 { 00:16:31.594 "method": "framework_set_scheduler", 00:16:31.594 "params": { 00:16:31.594 "name": "static" 00:16:31.594 } 00:16:31.594 } 00:16:31.594 ] 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "subsystem": "nvmf", 00:16:31.594 "config": [ 00:16:31.594 { 00:16:31.594 "method": "nvmf_set_config", 00:16:31.594 "params": { 00:16:31.594 "admin_cmd_passthru": { 00:16:31.594 "identify_ctrlr": false 00:16:31.594 }, 00:16:31.594 "discovery_filter": "match_any" 00:16:31.594 } 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "method": "nvmf_set_max_subsystems", 00:16:31.594 "params": { 00:16:31.594 "max_subsystems": 1024 00:16:31.594 } 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "method": "nvmf_set_crdt", 00:16:31.594 "params": { 00:16:31.594 "crdt1": 0, 00:16:31.594 "crdt2": 0, 00:16:31.594 "crdt3": 0 00:16:31.594 } 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "method": "nvmf_create_transport", 00:16:31.594 "params": { 00:16:31.594 "abort_timeout_sec": 1, 00:16:31.594 "ack_timeout": 0, 00:16:31.594 "buf_cache_size": 4294967295, 00:16:31.594 "c2h_success": false, 00:16:31.594 "data_wr_pool_size": 0, 00:16:31.594 "dif_insert_or_strip": false, 00:16:31.594 "in_capsule_data_size": 4096, 00:16:31.594 "io_unit_size": 131072, 00:16:31.594 "max_aq_depth": 128, 00:16:31.594 "max_io_qpairs_per_ctrlr": 127, 00:16:31.594 "max_io_size": 131072, 00:16:31.594 "max_queue_depth": 128, 00:16:31.594 "num_shared_buffers": 511, 00:16:31.594 "sock_priority": 0, 00:16:31.594 "trtype": "TCP", 00:16:31.594 "zcopy": false 00:16:31.594 } 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "method": "nvmf_create_subsystem", 00:16:31.594 "params": { 00:16:31.594 "allow_any_host": false, 00:16:31.594 "ana_reporting": false, 00:16:31.594 "max_cntlid": 65519, 00:16:31.594 "max_namespaces": 10, 00:16:31.594 "min_cntlid": 1, 00:16:31.594 "model_number": "SPDK bdev Controller", 00:16:31.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.594 "serial_number": "SPDK00000000000001" 00:16:31.594 } 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "method": 
"nvmf_subsystem_add_host", 00:16:31.594 "params": { 00:16:31.594 "host": "nqn.2016-06.io.spdk:host1", 00:16:31.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.594 "psk": "/tmp/tmp.wviZjmEfTw" 00:16:31.594 } 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "method": "nvmf_subsystem_add_ns", 00:16:31.594 "params": { 00:16:31.594 "namespace": { 00:16:31.594 "bdev_name": "malloc0", 00:16:31.594 "nguid": "8362E1C7F72240BEBF4B3E13C8F38536", 00:16:31.594 "no_auto_visible": false, 00:16:31.594 "nsid": 1, 00:16:31.594 "uuid": "8362e1c7-f722-40be-bf4b-3e13c8f38536" 00:16:31.594 }, 00:16:31.594 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:31.594 } 00:16:31.594 }, 00:16:31.594 { 00:16:31.594 "method": "nvmf_subsystem_add_listener", 00:16:31.594 "params": { 00:16:31.594 "listen_address": { 00:16:31.594 "adrfam": "IPv4", 00:16:31.594 "traddr": "10.0.0.2", 00:16:31.594 "trsvcid": "4420", 00:16:31.594 "trtype": "TCP" 00:16:31.594 }, 00:16:31.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.594 "secure_channel": true 00:16:31.594 } 00:16:31.594 } 00:16:31.594 ] 00:16:31.594 } 00:16:31.594 ] 00:16:31.594 }' 00:16:31.594 19:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:31.852 19:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:31.852 "subsystems": [ 00:16:31.852 { 00:16:31.852 "subsystem": "keyring", 00:16:31.852 "config": [] 00:16:31.852 }, 00:16:31.852 { 00:16:31.852 "subsystem": "iobuf", 00:16:31.852 "config": [ 00:16:31.852 { 00:16:31.852 "method": "iobuf_set_options", 00:16:31.852 "params": { 00:16:31.852 "large_bufsize": 135168, 00:16:31.852 "large_pool_count": 1024, 00:16:31.852 "small_bufsize": 8192, 00:16:31.852 "small_pool_count": 8192 00:16:31.852 } 00:16:31.852 } 00:16:31.852 ] 00:16:31.852 }, 00:16:31.852 { 00:16:31.852 "subsystem": "sock", 00:16:31.852 "config": [ 00:16:31.852 { 00:16:31.852 "method": "sock_set_default_impl", 00:16:31.852 "params": { 00:16:31.852 "impl_name": "posix" 00:16:31.852 } 00:16:31.852 }, 00:16:31.852 { 00:16:31.852 "method": "sock_impl_set_options", 00:16:31.852 "params": { 00:16:31.852 "enable_ktls": false, 00:16:31.852 "enable_placement_id": 0, 00:16:31.852 "enable_quickack": false, 00:16:31.852 "enable_recv_pipe": true, 00:16:31.852 "enable_zerocopy_send_client": false, 00:16:31.852 "enable_zerocopy_send_server": true, 00:16:31.852 "impl_name": "ssl", 00:16:31.852 "recv_buf_size": 4096, 00:16:31.852 "send_buf_size": 4096, 00:16:31.852 "tls_version": 0, 00:16:31.852 "zerocopy_threshold": 0 00:16:31.852 } 00:16:31.852 }, 00:16:31.852 { 00:16:31.852 "method": "sock_impl_set_options", 00:16:31.852 "params": { 00:16:31.852 "enable_ktls": false, 00:16:31.852 "enable_placement_id": 0, 00:16:31.852 "enable_quickack": false, 00:16:31.852 "enable_recv_pipe": true, 00:16:31.852 "enable_zerocopy_send_client": false, 00:16:31.852 "enable_zerocopy_send_server": true, 00:16:31.852 "impl_name": "posix", 00:16:31.852 "recv_buf_size": 2097152, 00:16:31.852 "send_buf_size": 2097152, 00:16:31.852 "tls_version": 0, 00:16:31.852 "zerocopy_threshold": 0 00:16:31.852 } 00:16:31.852 } 00:16:31.852 ] 00:16:31.852 }, 00:16:31.852 { 00:16:31.852 "subsystem": "vmd", 00:16:31.852 "config": [] 00:16:31.852 }, 00:16:31.852 { 00:16:31.852 "subsystem": "accel", 00:16:31.852 "config": [ 00:16:31.852 { 00:16:31.852 "method": "accel_set_options", 00:16:31.852 "params": { 00:16:31.852 "buf_count": 2048, 00:16:31.852 "large_cache_size": 16, 00:16:31.852 "sequence_count": 2048, 00:16:31.852 
"small_cache_size": 128, 00:16:31.852 "task_count": 2048 00:16:31.852 } 00:16:31.852 } 00:16:31.852 ] 00:16:31.852 }, 00:16:31.852 { 00:16:31.852 "subsystem": "bdev", 00:16:31.852 "config": [ 00:16:31.852 { 00:16:31.852 "method": "bdev_set_options", 00:16:31.852 "params": { 00:16:31.852 "bdev_auto_examine": true, 00:16:31.852 "bdev_io_cache_size": 256, 00:16:31.852 "bdev_io_pool_size": 65535, 00:16:31.852 "iobuf_large_cache_size": 16, 00:16:31.853 "iobuf_small_cache_size": 128 00:16:31.853 } 00:16:31.853 }, 00:16:31.853 { 00:16:31.853 "method": "bdev_raid_set_options", 00:16:31.853 "params": { 00:16:31.853 "process_window_size_kb": 1024 00:16:31.853 } 00:16:31.853 }, 00:16:31.853 { 00:16:31.853 "method": "bdev_iscsi_set_options", 00:16:31.853 "params": { 00:16:31.853 "timeout_sec": 30 00:16:31.853 } 00:16:31.853 }, 00:16:31.853 { 00:16:31.853 "method": "bdev_nvme_set_options", 00:16:31.853 "params": { 00:16:31.853 "action_on_timeout": "none", 00:16:31.853 "allow_accel_sequence": false, 00:16:31.853 "arbitration_burst": 0, 00:16:31.853 "bdev_retry_count": 3, 00:16:31.853 "ctrlr_loss_timeout_sec": 0, 00:16:31.853 "delay_cmd_submit": true, 00:16:31.853 "dhchap_dhgroups": [ 00:16:31.853 "null", 00:16:31.853 "ffdhe2048", 00:16:31.853 "ffdhe3072", 00:16:31.853 "ffdhe4096", 00:16:31.853 "ffdhe6144", 00:16:31.853 "ffdhe8192" 00:16:31.853 ], 00:16:31.853 "dhchap_digests": [ 00:16:31.853 "sha256", 00:16:31.853 "sha384", 00:16:31.853 "sha512" 00:16:31.853 ], 00:16:31.853 "disable_auto_failback": false, 00:16:31.853 "fast_io_fail_timeout_sec": 0, 00:16:31.853 "generate_uuids": false, 00:16:31.853 "high_priority_weight": 0, 00:16:31.853 "io_path_stat": false, 00:16:31.853 "io_queue_requests": 512, 00:16:31.853 "keep_alive_timeout_ms": 10000, 00:16:31.853 "low_priority_weight": 0, 00:16:31.853 "medium_priority_weight": 0, 00:16:31.853 "nvme_adminq_poll_period_us": 10000, 00:16:31.853 "nvme_error_stat": false, 00:16:31.853 "nvme_ioq_poll_period_us": 0, 00:16:31.853 "rdma_cm_event_timeout_ms": 0, 00:16:31.853 "rdma_max_cq_size": 0, 00:16:31.853 "rdma_srq_size": 0, 00:16:31.853 "reconnect_delay_sec": 0, 00:16:31.853 "timeout_admin_us": 0, 00:16:31.853 "timeout_us": 0, 00:16:31.853 "transport_ack_timeout": 0, 00:16:31.853 "transport_retry_count": 4, 00:16:31.853 "transport_tos": 0 00:16:31.853 } 00:16:31.853 }, 00:16:31.853 { 00:16:31.853 "method": "bdev_nvme_attach_controller", 00:16:31.853 "params": { 00:16:31.853 "adrfam": "IPv4", 00:16:31.853 "ctrlr_loss_timeout_sec": 0, 00:16:31.853 "ddgst": false, 00:16:31.853 "fast_io_fail_timeout_sec": 0, 00:16:31.853 "hdgst": false, 00:16:31.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.853 "name": "TLSTEST", 00:16:31.853 "prchk_guard": false, 00:16:31.853 "prchk_reftag": false, 00:16:31.853 "psk": "/tmp/tmp.wviZjmEfTw", 00:16:31.853 "reconnect_delay_sec": 0, 00:16:31.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.853 "traddr": "10.0.0.2", 00:16:31.853 "trsvcid": "4420", 00:16:31.853 "trtype": "TCP" 00:16:31.853 } 00:16:31.853 }, 00:16:31.853 { 00:16:31.853 "method": "bdev_nvme_set_hotplug", 00:16:31.853 "params": { 00:16:31.853 "enable": false, 00:16:31.853 "period_us": 100000 00:16:31.853 } 00:16:31.853 }, 00:16:31.853 { 00:16:31.853 "method": "bdev_wait_for_examine" 00:16:31.853 } 00:16:31.853 ] 00:16:31.853 }, 00:16:31.853 { 00:16:31.853 "subsystem": "nbd", 00:16:31.853 "config": [] 00:16:31.853 } 00:16:31.853 ] 00:16:31.853 }' 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84829 00:16:31.853 19:44:57 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84829 ']' 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84829 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84829 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:31.853 killing process with pid 84829 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84829' 00:16:31.853 Received shutdown signal, test time was about 10.000000 seconds 00:16:31.853 00:16:31.853 Latency(us) 00:16:31.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.853 =================================================================================================================== 00:16:31.853 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84829 00:16:31.853 [2024-07-15 19:44:57.615055] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:31.853 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84829 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84726 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84726 ']' 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84726 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84726 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:32.112 killing process with pid 84726 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84726' 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84726 00:16:32.112 [2024-07-15 19:44:57.864068] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:32.112 19:44:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84726 00:16:32.370 19:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:32.370 19:44:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:32.370 19:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:32.370 19:44:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:32.370 "subsystems": [ 00:16:32.370 { 00:16:32.370 "subsystem": "keyring", 00:16:32.370 "config": [] 00:16:32.370 }, 00:16:32.370 { 00:16:32.370 "subsystem": "iobuf", 00:16:32.370 "config": [ 00:16:32.370 { 00:16:32.370 "method": "iobuf_set_options", 00:16:32.370 "params": { 00:16:32.370 "large_bufsize": 135168, 
00:16:32.370 "large_pool_count": 1024, 00:16:32.370 "small_bufsize": 8192, 00:16:32.370 "small_pool_count": 8192 00:16:32.370 } 00:16:32.370 } 00:16:32.370 ] 00:16:32.370 }, 00:16:32.370 { 00:16:32.370 "subsystem": "sock", 00:16:32.370 "config": [ 00:16:32.370 { 00:16:32.370 "method": "sock_set_default_impl", 00:16:32.370 "params": { 00:16:32.370 "impl_name": "posix" 00:16:32.370 } 00:16:32.370 }, 00:16:32.370 { 00:16:32.370 "method": "sock_impl_set_options", 00:16:32.370 "params": { 00:16:32.370 "enable_ktls": false, 00:16:32.370 "enable_placement_id": 0, 00:16:32.370 "enable_quickack": false, 00:16:32.370 "enable_recv_pipe": true, 00:16:32.370 "enable_zerocopy_send_client": false, 00:16:32.370 "enable_zerocopy_send_server": true, 00:16:32.370 "impl_name": "ssl", 00:16:32.370 "recv_buf_size": 4096, 00:16:32.370 "send_buf_size": 4096, 00:16:32.370 "tls_version": 0, 00:16:32.370 "zerocopy_threshold": 0 00:16:32.370 } 00:16:32.370 }, 00:16:32.370 { 00:16:32.370 "method": "sock_impl_set_options", 00:16:32.370 "params": { 00:16:32.370 "enable_ktls": false, 00:16:32.370 "enable_placement_id": 0, 00:16:32.370 "enable_quickack": false, 00:16:32.370 "enable_recv_pipe": true, 00:16:32.370 "enable_zerocopy_send_client": false, 00:16:32.370 "enable_zerocopy_send_server": true, 00:16:32.370 "impl_name": "posix", 00:16:32.370 "recv_buf_size": 2097152, 00:16:32.370 "send_buf_size": 2097152, 00:16:32.370 "tls_version": 0, 00:16:32.370 "zerocopy_threshold": 0 00:16:32.370 } 00:16:32.370 } 00:16:32.370 ] 00:16:32.370 }, 00:16:32.370 { 00:16:32.370 "subsystem": "vmd", 00:16:32.370 "config": [] 00:16:32.370 }, 00:16:32.370 { 00:16:32.370 "subsystem": "accel", 00:16:32.370 "config": [ 00:16:32.370 { 00:16:32.370 "method": "accel_set_options", 00:16:32.370 "params": { 00:16:32.370 "buf_count": 2048, 00:16:32.370 "large_cache_size": 16, 00:16:32.370 "sequence_count": 2048, 00:16:32.370 "small_cache_size": 128, 00:16:32.371 "task_count": 2048 00:16:32.371 } 00:16:32.371 } 00:16:32.371 ] 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "subsystem": "bdev", 00:16:32.371 "config": [ 00:16:32.371 { 00:16:32.371 "method": "bdev_set_options", 00:16:32.371 "params": { 00:16:32.371 "bdev_auto_examine": true, 00:16:32.371 "bdev_io_cache_size": 256, 00:16:32.371 "bdev_io_pool_size": 65535, 00:16:32.371 "iobuf_large_cache_size": 16, 00:16:32.371 "iobuf_small_cache_size": 128 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "bdev_raid_set_options", 00:16:32.371 "params": { 00:16:32.371 "process_window_size_kb": 1024 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "bdev_iscsi_set_options", 00:16:32.371 "params": { 00:16:32.371 "timeout_sec": 30 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "bdev_nvme_set_options", 00:16:32.371 "params": { 00:16:32.371 "action_on_timeout": "none", 00:16:32.371 "allow_accel_sequence": false, 00:16:32.371 "arbitration_burst": 0, 00:16:32.371 "bdev_retry_count": 3, 00:16:32.371 "ctrlr_loss_timeout_sec": 0, 00:16:32.371 "delay_cmd_submit": true, 00:16:32.371 "dhchap_dhgroups": [ 00:16:32.371 "null", 00:16:32.371 "ffdhe2048", 00:16:32.371 "ffdhe3072", 00:16:32.371 "ffdhe4096", 00:16:32.371 "ffdhe6144", 00:16:32.371 "ffdhe8192" 00:16:32.371 ], 00:16:32.371 "dhchap_digests": [ 00:16:32.371 "sha256", 00:16:32.371 "sha384", 00:16:32.371 "sha512" 00:16:32.371 ], 00:16:32.371 "disable_auto_failback": false, 00:16:32.371 "fast_io_fail_timeout_sec": 0, 00:16:32.371 "generate_uuids": false, 00:16:32.371 "high_priority_weight": 0, 00:16:32.371 
"io_path_stat": false, 00:16:32.371 "io_queue_requests": 0, 00:16:32.371 "keep_alive_timeout_ms": 10000, 00:16:32.371 "low_priority_weight": 0, 00:16:32.371 "medium_priority_weight": 0, 00:16:32.371 "nvme_adminq_poll_period_us": 10000, 00:16:32.371 "nvme_error_stat": false, 00:16:32.371 "nvme_ioq_poll_period_us": 0, 00:16:32.371 "rdma_cm_event_timeout_ms": 0, 00:16:32.371 "rdma_max_cq_size": 0, 00:16:32.371 "rdma_srq_size": 0, 00:16:32.371 "reconnect_delay_sec": 0, 00:16:32.371 "timeout_admin_us": 0, 00:16:32.371 "timeout_us": 0, 00:16:32.371 "transport_ack_timeout": 0, 00:16:32.371 "transport_retry_count": 4, 00:16:32.371 "transport_tos": 0 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "bdev_nvme_set_hotplug", 00:16:32.371 "params": { 00:16:32.371 "enable": false, 00:16:32.371 "period_us": 100000 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "bdev_malloc_create", 00:16:32.371 "params": { 00:16:32.371 "block_size": 4096, 00:16:32.371 "name": "malloc0", 00:16:32.371 "num_blocks": 8192, 00:16:32.371 "optimal_io_boundary": 0, 00:16:32.371 "physical_block_size": 4096, 00:16:32.371 "uuid": "8362e1c7-f722-40be-bf4b-3e13c8f38536" 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "bdev_wait_for_examine" 00:16:32.371 } 00:16:32.371 ] 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "subsystem": "nbd", 00:16:32.371 "config": [] 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "subsystem": "scheduler", 00:16:32.371 "config": [ 00:16:32.371 { 00:16:32.371 "method": "framework_set_scheduler", 00:16:32.371 "params": { 00:16:32.371 "name": "static" 00:16:32.371 } 00:16:32.371 } 00:16:32.371 ] 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "subsystem": "nvmf", 00:16:32.371 "config": [ 00:16:32.371 { 00:16:32.371 "method": "nvmf_set_config", 00:16:32.371 "params": { 00:16:32.371 "admin_cmd_passthru": { 00:16:32.371 "identify_ctrlr": false 00:16:32.371 }, 00:16:32.371 "discovery_filter": "match_any" 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "nvmf_set_max_subsystems", 00:16:32.371 "params": { 00:16:32.371 "max_subsystems": 1024 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "nvmf_set_crdt", 00:16:32.371 "params": { 00:16:32.371 "crdt1": 0, 00:16:32.371 "crdt2": 0, 00:16:32.371 "crdt3": 0 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "nvmf_create_transport", 00:16:32.371 "params": { 00:16:32.371 "abort_timeout_sec": 1, 00:16:32.371 "ack_timeout": 0, 00:16:32.371 "buf_cache_size": 4294967295, 00:16:32.371 "c2h_success": false, 00:16:32.371 "data_wr_pool_size": 0, 00:16:32.371 "dif_insert_or_strip": false, 00:16:32.371 "in_capsule_data_size": 4096, 00:16:32.371 "io_unit_size": 131072, 00:16:32.371 "max_aq_depth": 128, 00:16:32.371 "max_io_qpairs_per_ctrlr": 127, 00:16:32.371 "max_io_size": 131072, 00:16:32.371 "max_queue_depth": 128, 00:16:32.371 "num_shared_buffers": 511, 00:16:32.371 "sock_priority": 0, 00:16:32.371 "trtype": "TCP", 00:16:32.371 "zcopy": false 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "nvmf_create_subsystem", 00:16:32.371 "params": { 00:16:32.371 "allow_any_host": false, 00:16:32.371 "ana_reporting": false, 00:16:32.371 "max_cntlid": 65519, 00:16:32.371 "max_namespaces": 10, 00:16:32.371 "min_cntlid": 1, 00:16:32.371 "model_number": "SPDK bdev Controller", 00:16:32.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.371 "serial_number": "SPDK00000000000001" 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": 
"nvmf_subsystem_add_host", 00:16:32.371 "params": { 00:16:32.371 "host": "nqn.2016-06.io.spdk:host1", 00:16:32.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.371 "psk": "/tmp/tmp.wviZjmEfTw" 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "nvmf_subsystem_add_ns", 00:16:32.371 "params": { 00:16:32.371 "namespace": { 00:16:32.371 "bdev_name": "malloc0", 00:16:32.371 "nguid": "8362E1C7F72240BEBF4B3E13C8F38536", 00:16:32.371 "no_auto_visible": false, 00:16:32.371 "nsid": 1, 00:16:32.371 "uuid": "8362e1c7-f722-40be-bf4b-3e13c8f38536" 00:16:32.371 }, 00:16:32.371 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "method": "nvmf_subsystem_add_listener", 00:16:32.371 "params": { 00:16:32.371 "listen_address": { 00:16:32.371 "adrfam": "IPv4", 00:16:32.371 "traddr": "10.0.0.2", 00:16:32.371 "trsvcid": "4420", 00:16:32.371 "trtype": "TCP" 00:16:32.371 }, 00:16:32.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.371 "secure_channel": true 00:16:32.371 } 00:16:32.371 } 00:16:32.371 ] 00:16:32.371 } 00:16:32.371 ] 00:16:32.371 }' 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84902 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84902 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84902 ']' 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:32.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:32.371 19:44:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.629 [2024-07-15 19:44:58.168280] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:32.629 [2024-07-15 19:44:58.168404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.629 [2024-07-15 19:44:58.311771] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.886 [2024-07-15 19:44:58.433185] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.886 [2024-07-15 19:44:58.433259] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.886 [2024-07-15 19:44:58.433287] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.886 [2024-07-15 19:44:58.433295] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.886 [2024-07-15 19:44:58.433302] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:32.886 [2024-07-15 19:44:58.433380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.886 [2024-07-15 19:44:58.664940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.143 [2024-07-15 19:44:58.680869] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:33.143 [2024-07-15 19:44:58.696875] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:33.143 [2024-07-15 19:44:58.697113] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.400 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.400 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:33.400 19:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.400 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.400 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.657 19:44:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.657 19:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84946 00:16:33.657 19:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84946 /var/tmp/bdevperf.sock 00:16:33.657 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84946 ']' 00:16:33.657 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.657 19:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:33.657 19:44:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:33.657 "subsystems": [ 00:16:33.657 { 00:16:33.657 "subsystem": "keyring", 00:16:33.657 "config": [] 00:16:33.657 }, 00:16:33.657 { 00:16:33.657 "subsystem": "iobuf", 00:16:33.657 "config": [ 00:16:33.657 { 00:16:33.657 "method": "iobuf_set_options", 00:16:33.657 "params": { 00:16:33.657 "large_bufsize": 135168, 00:16:33.657 "large_pool_count": 1024, 00:16:33.657 "small_bufsize": 8192, 00:16:33.657 "small_pool_count": 8192 00:16:33.657 } 00:16:33.657 } 00:16:33.657 ] 00:16:33.657 }, 00:16:33.657 { 00:16:33.657 "subsystem": "sock", 00:16:33.657 "config": [ 00:16:33.657 { 00:16:33.657 "method": "sock_set_default_impl", 00:16:33.657 "params": { 00:16:33.657 "impl_name": "posix" 00:16:33.657 } 00:16:33.657 }, 00:16:33.657 { 00:16:33.657 "method": "sock_impl_set_options", 00:16:33.657 "params": { 00:16:33.657 "enable_ktls": false, 00:16:33.657 "enable_placement_id": 0, 00:16:33.657 "enable_quickack": false, 00:16:33.657 "enable_recv_pipe": true, 00:16:33.657 "enable_zerocopy_send_client": false, 00:16:33.657 "enable_zerocopy_send_server": true, 00:16:33.657 "impl_name": "ssl", 00:16:33.657 "recv_buf_size": 4096, 00:16:33.657 "send_buf_size": 4096, 00:16:33.657 "tls_version": 0, 00:16:33.657 "zerocopy_threshold": 0 00:16:33.657 } 00:16:33.657 }, 00:16:33.657 { 00:16:33.657 "method": "sock_impl_set_options", 00:16:33.657 "params": { 00:16:33.657 "enable_ktls": false, 00:16:33.657 "enable_placement_id": 0, 00:16:33.657 "enable_quickack": false, 00:16:33.657 "enable_recv_pipe": true, 00:16:33.657 "enable_zerocopy_send_client": false, 00:16:33.657 "enable_zerocopy_send_server": true, 00:16:33.657 "impl_name": "posix", 
00:16:33.658 "recv_buf_size": 2097152, 00:16:33.658 "send_buf_size": 2097152, 00:16:33.658 "tls_version": 0, 00:16:33.658 "zerocopy_threshold": 0 00:16:33.658 } 00:16:33.658 } 00:16:33.658 ] 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "subsystem": "vmd", 00:16:33.658 "config": [] 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "subsystem": "accel", 00:16:33.658 "config": [ 00:16:33.658 { 00:16:33.658 "method": "accel_set_options", 00:16:33.658 "params": { 00:16:33.658 "buf_count": 2048, 00:16:33.658 "large_cache_size": 16, 00:16:33.658 "sequence_count": 2048, 00:16:33.658 "small_cache_size": 128, 00:16:33.658 "task_count": 2048 00:16:33.658 } 00:16:33.658 } 00:16:33.658 ] 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "subsystem": "bdev", 00:16:33.658 "config": [ 00:16:33.658 { 00:16:33.658 "method": "bdev_set_options", 00:16:33.658 "params": { 00:16:33.658 "bdev_auto_examine": true, 00:16:33.658 "bdev_io_cache_size": 256, 00:16:33.658 "bdev_io_pool_size": 65535, 00:16:33.658 "iobuf_large_cache_size": 16, 00:16:33.658 "iobuf_small_cache_size": 128 00:16:33.658 } 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "method": "bdev_raid_set_options", 00:16:33.658 "params": { 00:16:33.658 "process_window_size_kb": 1024 00:16:33.658 } 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "method": "bdev_iscsi_set_options", 00:16:33.658 "params": { 00:16:33.658 "timeout_sec": 30 00:16:33.658 } 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "method": "bdev_nvme_set_options", 00:16:33.658 "params": { 00:16:33.658 "action_on_timeout": "none", 00:16:33.658 "allow_accel_sequence": false, 00:16:33.658 "arbitration_burst": 0, 00:16:33.658 "bdev_retry_count": 3, 00:16:33.658 "ctrlr_loss_timeout_sec": 0, 00:16:33.658 "delay_cmd_submit": true, 00:16:33.658 "dhchap_dhgroups": [ 00:16:33.658 "null", 00:16:33.658 "ffdhe2048", 00:16:33.658 "ffdhe3072", 00:16:33.658 "ffdhe4096", 00:16:33.658 "ffdhe6144", 00:16:33.658 "ffdhe8192" 00:16:33.658 ], 00:16:33.658 "dhchap_digests": [ 00:16:33.658 "sha256", 00:16:33.658 "sha384", 00:16:33.658 "sha512" 00:16:33.658 ], 00:16:33.658 "disable_auto_failback": false, 00:16:33.658 "fast_io_fail_timeout_sec": 0, 00:16:33.658 "generate_uuids": false, 00:16:33.658 "high_priority_weight": 0, 00:16:33.658 "io_path_stat": false, 00:16:33.658 "io_queue_requests": 512, 00:16:33.658 "keep_alive_timeout_ms": 10000, 00:16:33.658 "low_priority_weight": 0, 00:16:33.658 "medium_priority_weight": 0, 00:16:33.658 "nvme_adminq_poll_period_us": 10000, 00:16:33.658 "nvme_error_stat": false, 00:16:33.658 "nvme_ioq_poll_period_us": 0, 00:16:33.658 "rdma_cm_event_timeout_ms": 0, 00:16:33.658 "rdma_max_cq_size": 0, 00:16:33.658 "rdma_srq_size": 0, 00:16:33.658 "reconnect_delay_sec": 0, 00:16:33.658 "timeout_admin_us": 0, 00:16:33.658 "timeout_us": 0, 00:16:33.658 "transport_ack_timeout": 0, 00:16:33.658 "transport_retry_count": 4, 00:16:33.658 "transport_tos": 0 00:16:33.658 } 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "method": "bdev_nvme_attach_controller", 00:16:33.658 "params": { 00:16:33.658 "adrfam": "IPv4", 00:16:33.658 "ctrlr_loss_timeout_sec": 0, 00:16:33.658 "ddgst": false, 00:16:33.658 "fast_io_fail_timeout_sec": 0, 00:16:33.658 "hdgst": false, 00:16:33.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.658 "name": "TLSTEST", 00:16:33.658 "prchk_guard": false, 00:16:33.658 "prchk_reftag": false, 00:16:33.658 "psk": "/tmp/tmp.wviZjmEfTw", 00:16:33.658 "reconnect_delay_sec": 0, 00:16:33.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.658 "traddr": "10.0.0.2", 00:16:33.658 "trsvcid": "4420", 00:16:33.658 
"trtype": "TCP" 00:16:33.658 } 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "method": "bdev_nvme_set_hotplug", 00:16:33.658 "params": { 00:16:33.658 "enable": false, 00:16:33.658 "period_us": 100000 00:16:33.658 } 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "method": "bdev_wait_for_examine" 00:16:33.658 } 00:16:33.658 ] 00:16:33.658 }, 00:16:33.658 { 00:16:33.658 "subsystem": "nbd", 00:16:33.658 "config": [] 00:16:33.658 } 00:16:33.658 ] 00:16:33.658 }' 00:16:33.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.658 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.658 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:33.658 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.658 19:44:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.658 [2024-07-15 19:44:59.260556] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:33.658 [2024-07-15 19:44:59.260750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84946 ] 00:16:33.658 [2024-07-15 19:44:59.406381] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.915 [2024-07-15 19:44:59.531972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.171 [2024-07-15 19:44:59.699995] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:34.171 [2024-07-15 19:44:59.700151] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:34.734 19:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.734 19:45:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:34.734 19:45:00 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:34.734 Running I/O for 10 seconds... 
00:16:44.692 00:16:44.692 Latency(us) 00:16:44.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.692 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:44.692 Verification LBA range: start 0x0 length 0x2000 00:16:44.692 TLSTESTn1 : 10.02 3847.52 15.03 0.00 0.00 33197.55 2651.23 25261.15 00:16:44.692 =================================================================================================================== 00:16:44.692 Total : 3847.52 15.03 0.00 0.00 33197.55 2651.23 25261.15 00:16:44.692 0 00:16:44.692 19:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.692 19:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84946 00:16:44.692 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84946 ']' 00:16:44.692 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84946 00:16:44.692 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:44.693 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.693 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84946 00:16:44.693 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:44.693 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:44.693 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84946' 00:16:44.693 killing process with pid 84946 00:16:44.693 Received shutdown signal, test time was about 10.000000 seconds 00:16:44.693 00:16:44.693 Latency(us) 00:16:44.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.693 =================================================================================================================== 00:16:44.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.693 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84946 00:16:44.693 [2024-07-15 19:45:10.432949] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:44.693 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84946 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84902 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84902 ']' 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84902 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84902 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:44.951 killing process with pid 84902 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84902' 00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84902 00:16:44.951 [2024-07-15 19:45:10.691561] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 
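(Sketch, not part of the captured trace.) Condensed, the target-side setup and the path-based bdevperf attach exercised in the iteration above reduce to the rpc.py sequence below; every call is taken from the trace itself, and the /tmp PSK filename is simply the temporary file this particular run created:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw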
00:16:44.951 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84902 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85098 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85098 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85098 ']' 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.209 19:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.209 [2024-07-15 19:45:10.978094] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:45.209 [2024-07-15 19:45:10.978225] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.466 [2024-07-15 19:45:11.116844] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.466 [2024-07-15 19:45:11.247974] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.466 [2024-07-15 19:45:11.248048] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.466 [2024-07-15 19:45:11.248075] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.466 [2024-07-15 19:45:11.248098] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.466 [2024-07-15 19:45:11.248107] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.466 [2024-07-15 19:45:11.248137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.wviZjmEfTw 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wviZjmEfTw 00:16:46.398 19:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:46.399 [2024-07-15 19:45:12.176128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.656 19:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:46.914 19:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:47.172 [2024-07-15 19:45:12.720315] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.172 [2024-07-15 19:45:12.720525] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.172 19:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:47.429 malloc0 00:16:47.429 19:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:47.429 19:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wviZjmEfTw 00:16:47.686 [2024-07-15 19:45:13.424304] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85195 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85195 /var/tmp/bdevperf.sock 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85195 ']' 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.686 19:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.943 [2024-07-15 19:45:13.490446] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:47.943 [2024-07-15 19:45:13.490576] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85195 ] 00:16:47.943 [2024-07-15 19:45:13.626411] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.200 [2024-07-15 19:45:13.752383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.766 19:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.766 19:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:48.766 19:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wviZjmEfTw 00:16:49.024 19:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:49.281 [2024-07-15 19:45:14.946261] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:49.281 nvme0n1 00:16:49.281 19:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:49.540 Running I/O for 1 seconds... 
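(Sketch, not part of the captured trace.) The iteration above attaches to the same TLS listener but registers the PSK with bdevperf's keyring (-s /var/tmp/bdevperf.sock) first and then references it by name rather than by path; both calls are lifted verbatim from the trace, with key0 being the key name this run chose:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wviZjmEfTw
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1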
00:16:50.475 00:16:50.475 Latency(us) 00:16:50.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.475 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:50.475 Verification LBA range: start 0x0 length 0x2000 00:16:50.475 nvme0n1 : 1.03 3950.74 15.43 0.00 0.00 31958.08 7566.43 19779.96 00:16:50.475 =================================================================================================================== 00:16:50.475 Total : 3950.74 15.43 0.00 0.00 31958.08 7566.43 19779.96 00:16:50.475 0 00:16:50.475 19:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85195 00:16:50.475 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85195 ']' 00:16:50.475 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85195 00:16:50.475 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:50.475 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.475 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85195 00:16:50.476 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:50.476 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:50.476 killing process with pid 85195 00:16:50.476 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85195' 00:16:50.476 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85195 00:16:50.476 Received shutdown signal, test time was about 1.000000 seconds 00:16:50.476 00:16:50.476 Latency(us) 00:16:50.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.476 =================================================================================================================== 00:16:50.476 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.476 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85195 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85098 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85098 ']' 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85098 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85098 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.734 killing process with pid 85098 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85098' 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85098 00:16:50.734 [2024-07-15 19:45:16.471822] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:50.734 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85098 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85275 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85275 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85275 ']' 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.993 19:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.993 [2024-07-15 19:45:16.764757] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:50.993 [2024-07-15 19:45:16.764896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.252 [2024-07-15 19:45:16.894338] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.252 [2024-07-15 19:45:17.005810] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.252 [2024-07-15 19:45:17.005904] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.252 [2024-07-15 19:45:17.005916] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.252 [2024-07-15 19:45:17.005924] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.252 [2024-07-15 19:45:17.005932] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:51.252 [2024-07-15 19:45:17.005967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.186 [2024-07-15 19:45:17.773867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.186 malloc0 00:16:52.186 [2024-07-15 19:45:17.805333] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.186 [2024-07-15 19:45:17.805597] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=85326 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 85326 /var/tmp/bdevperf.sock 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85326 ']' 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.186 19:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.186 [2024-07-15 19:45:17.893525] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:16:52.186 [2024-07-15 19:45:17.893626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85326 ] 00:16:52.444 [2024-07-15 19:45:18.035701] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.444 [2024-07-15 19:45:18.131419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.379 19:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.379 19:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:53.379 19:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wviZjmEfTw 00:16:53.638 19:45:19 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:53.897 [2024-07-15 19:45:19.421690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.897 nvme0n1 00:16:53.897 19:45:19 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.897 Running I/O for 1 seconds... 00:16:55.267 00:16:55.267 Latency(us) 00:16:55.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.267 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:55.267 Verification LBA range: start 0x0 length 0x2000 00:16:55.267 nvme0n1 : 1.03 3983.70 15.56 0.00 0.00 31750.51 11736.90 24546.21 00:16:55.267 =================================================================================================================== 00:16:55.267 Total : 3983.70 15.56 0.00 0.00 31750.51 11736.90 24546.21 00:16:55.267 0 00:16:55.267 19:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:16:55.267 19:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.267 19:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.267 19:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.267 19:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:16:55.267 "subsystems": [ 00:16:55.267 { 00:16:55.267 "subsystem": "keyring", 00:16:55.267 "config": [ 00:16:55.267 { 00:16:55.267 "method": "keyring_file_add_key", 00:16:55.267 "params": { 00:16:55.267 "name": "key0", 00:16:55.267 "path": "/tmp/tmp.wviZjmEfTw" 00:16:55.267 } 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "subsystem": "iobuf", 00:16:55.267 "config": [ 00:16:55.267 { 00:16:55.267 "method": "iobuf_set_options", 00:16:55.267 "params": { 00:16:55.267 "large_bufsize": 135168, 00:16:55.267 "large_pool_count": 1024, 00:16:55.267 "small_bufsize": 8192, 00:16:55.267 "small_pool_count": 8192 00:16:55.267 } 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "subsystem": "sock", 00:16:55.267 "config": [ 00:16:55.267 { 00:16:55.267 "method": "sock_set_default_impl", 00:16:55.267 "params": { 00:16:55.267 "impl_name": "posix" 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "sock_impl_set_options", 00:16:55.267 "params": { 
00:16:55.267 "enable_ktls": false, 00:16:55.267 "enable_placement_id": 0, 00:16:55.267 "enable_quickack": false, 00:16:55.267 "enable_recv_pipe": true, 00:16:55.267 "enable_zerocopy_send_client": false, 00:16:55.267 "enable_zerocopy_send_server": true, 00:16:55.267 "impl_name": "ssl", 00:16:55.267 "recv_buf_size": 4096, 00:16:55.267 "send_buf_size": 4096, 00:16:55.267 "tls_version": 0, 00:16:55.267 "zerocopy_threshold": 0 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "sock_impl_set_options", 00:16:55.267 "params": { 00:16:55.267 "enable_ktls": false, 00:16:55.267 "enable_placement_id": 0, 00:16:55.267 "enable_quickack": false, 00:16:55.267 "enable_recv_pipe": true, 00:16:55.267 "enable_zerocopy_send_client": false, 00:16:55.267 "enable_zerocopy_send_server": true, 00:16:55.267 "impl_name": "posix", 00:16:55.267 "recv_buf_size": 2097152, 00:16:55.267 "send_buf_size": 2097152, 00:16:55.267 "tls_version": 0, 00:16:55.267 "zerocopy_threshold": 0 00:16:55.267 } 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "subsystem": "vmd", 00:16:55.267 "config": [] 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "subsystem": "accel", 00:16:55.267 "config": [ 00:16:55.267 { 00:16:55.267 "method": "accel_set_options", 00:16:55.267 "params": { 00:16:55.267 "buf_count": 2048, 00:16:55.267 "large_cache_size": 16, 00:16:55.267 "sequence_count": 2048, 00:16:55.267 "small_cache_size": 128, 00:16:55.267 "task_count": 2048 00:16:55.267 } 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "subsystem": "bdev", 00:16:55.267 "config": [ 00:16:55.267 { 00:16:55.267 "method": "bdev_set_options", 00:16:55.267 "params": { 00:16:55.267 "bdev_auto_examine": true, 00:16:55.267 "bdev_io_cache_size": 256, 00:16:55.267 "bdev_io_pool_size": 65535, 00:16:55.267 "iobuf_large_cache_size": 16, 00:16:55.267 "iobuf_small_cache_size": 128 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "bdev_raid_set_options", 00:16:55.267 "params": { 00:16:55.267 "process_window_size_kb": 1024 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "bdev_iscsi_set_options", 00:16:55.267 "params": { 00:16:55.267 "timeout_sec": 30 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "bdev_nvme_set_options", 00:16:55.267 "params": { 00:16:55.267 "action_on_timeout": "none", 00:16:55.267 "allow_accel_sequence": false, 00:16:55.267 "arbitration_burst": 0, 00:16:55.267 "bdev_retry_count": 3, 00:16:55.267 "ctrlr_loss_timeout_sec": 0, 00:16:55.267 "delay_cmd_submit": true, 00:16:55.267 "dhchap_dhgroups": [ 00:16:55.267 "null", 00:16:55.267 "ffdhe2048", 00:16:55.267 "ffdhe3072", 00:16:55.267 "ffdhe4096", 00:16:55.267 "ffdhe6144", 00:16:55.267 "ffdhe8192" 00:16:55.267 ], 00:16:55.267 "dhchap_digests": [ 00:16:55.267 "sha256", 00:16:55.267 "sha384", 00:16:55.267 "sha512" 00:16:55.267 ], 00:16:55.267 "disable_auto_failback": false, 00:16:55.267 "fast_io_fail_timeout_sec": 0, 00:16:55.267 "generate_uuids": false, 00:16:55.267 "high_priority_weight": 0, 00:16:55.267 "io_path_stat": false, 00:16:55.267 "io_queue_requests": 0, 00:16:55.267 "keep_alive_timeout_ms": 10000, 00:16:55.267 "low_priority_weight": 0, 00:16:55.267 "medium_priority_weight": 0, 00:16:55.267 "nvme_adminq_poll_period_us": 10000, 00:16:55.267 "nvme_error_stat": false, 00:16:55.267 "nvme_ioq_poll_period_us": 0, 00:16:55.267 "rdma_cm_event_timeout_ms": 0, 00:16:55.267 "rdma_max_cq_size": 0, 00:16:55.267 "rdma_srq_size": 0, 00:16:55.267 "reconnect_delay_sec": 0, 00:16:55.267 
"timeout_admin_us": 0, 00:16:55.267 "timeout_us": 0, 00:16:55.267 "transport_ack_timeout": 0, 00:16:55.267 "transport_retry_count": 4, 00:16:55.267 "transport_tos": 0 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "bdev_nvme_set_hotplug", 00:16:55.267 "params": { 00:16:55.267 "enable": false, 00:16:55.267 "period_us": 100000 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "bdev_malloc_create", 00:16:55.267 "params": { 00:16:55.267 "block_size": 4096, 00:16:55.267 "name": "malloc0", 00:16:55.267 "num_blocks": 8192, 00:16:55.267 "optimal_io_boundary": 0, 00:16:55.267 "physical_block_size": 4096, 00:16:55.267 "uuid": "8e161ac8-48a2-4045-8ebf-b62219b66d25" 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "bdev_wait_for_examine" 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "subsystem": "nbd", 00:16:55.267 "config": [] 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "subsystem": "scheduler", 00:16:55.267 "config": [ 00:16:55.267 { 00:16:55.267 "method": "framework_set_scheduler", 00:16:55.267 "params": { 00:16:55.267 "name": "static" 00:16:55.267 } 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "subsystem": "nvmf", 00:16:55.267 "config": [ 00:16:55.267 { 00:16:55.267 "method": "nvmf_set_config", 00:16:55.267 "params": { 00:16:55.267 "admin_cmd_passthru": { 00:16:55.267 "identify_ctrlr": false 00:16:55.267 }, 00:16:55.267 "discovery_filter": "match_any" 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "nvmf_set_max_subsystems", 00:16:55.267 "params": { 00:16:55.267 "max_subsystems": 1024 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "nvmf_set_crdt", 00:16:55.267 "params": { 00:16:55.267 "crdt1": 0, 00:16:55.267 "crdt2": 0, 00:16:55.267 "crdt3": 0 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "nvmf_create_transport", 00:16:55.267 "params": { 00:16:55.267 "abort_timeout_sec": 1, 00:16:55.267 "ack_timeout": 0, 00:16:55.267 "buf_cache_size": 4294967295, 00:16:55.267 "c2h_success": false, 00:16:55.267 "data_wr_pool_size": 0, 00:16:55.267 "dif_insert_or_strip": false, 00:16:55.267 "in_capsule_data_size": 4096, 00:16:55.267 "io_unit_size": 131072, 00:16:55.267 "max_aq_depth": 128, 00:16:55.267 "max_io_qpairs_per_ctrlr": 127, 00:16:55.267 "max_io_size": 131072, 00:16:55.267 "max_queue_depth": 128, 00:16:55.267 "num_shared_buffers": 511, 00:16:55.267 "sock_priority": 0, 00:16:55.267 "trtype": "TCP", 00:16:55.267 "zcopy": false 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "nvmf_create_subsystem", 00:16:55.267 "params": { 00:16:55.267 "allow_any_host": false, 00:16:55.267 "ana_reporting": false, 00:16:55.267 "max_cntlid": 65519, 00:16:55.267 "max_namespaces": 32, 00:16:55.267 "min_cntlid": 1, 00:16:55.267 "model_number": "SPDK bdev Controller", 00:16:55.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.267 "serial_number": "00000000000000000000" 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "nvmf_subsystem_add_host", 00:16:55.267 "params": { 00:16:55.267 "host": "nqn.2016-06.io.spdk:host1", 00:16:55.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.267 "psk": "key0" 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "nvmf_subsystem_add_ns", 00:16:55.267 "params": { 00:16:55.267 "namespace": { 00:16:55.267 "bdev_name": "malloc0", 00:16:55.267 "nguid": "8E161AC848A240458EBFB62219B66D25", 00:16:55.267 "no_auto_visible": false, 00:16:55.267 "nsid": 1, 00:16:55.267 
"uuid": "8e161ac8-48a2-4045-8ebf-b62219b66d25" 00:16:55.267 }, 00:16:55.267 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:55.267 } 00:16:55.267 }, 00:16:55.267 { 00:16:55.267 "method": "nvmf_subsystem_add_listener", 00:16:55.267 "params": { 00:16:55.267 "listen_address": { 00:16:55.267 "adrfam": "IPv4", 00:16:55.267 "traddr": "10.0.0.2", 00:16:55.267 "trsvcid": "4420", 00:16:55.267 "trtype": "TCP" 00:16:55.267 }, 00:16:55.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.267 "secure_channel": false, 00:16:55.267 "sock_impl": "ssl" 00:16:55.267 } 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 } 00:16:55.267 ] 00:16:55.267 }' 00:16:55.267 19:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:55.525 19:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:16:55.525 "subsystems": [ 00:16:55.525 { 00:16:55.525 "subsystem": "keyring", 00:16:55.525 "config": [ 00:16:55.525 { 00:16:55.525 "method": "keyring_file_add_key", 00:16:55.525 "params": { 00:16:55.525 "name": "key0", 00:16:55.525 "path": "/tmp/tmp.wviZjmEfTw" 00:16:55.525 } 00:16:55.525 } 00:16:55.525 ] 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "subsystem": "iobuf", 00:16:55.525 "config": [ 00:16:55.525 { 00:16:55.525 "method": "iobuf_set_options", 00:16:55.525 "params": { 00:16:55.525 "large_bufsize": 135168, 00:16:55.525 "large_pool_count": 1024, 00:16:55.525 "small_bufsize": 8192, 00:16:55.525 "small_pool_count": 8192 00:16:55.525 } 00:16:55.525 } 00:16:55.525 ] 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "subsystem": "sock", 00:16:55.525 "config": [ 00:16:55.525 { 00:16:55.525 "method": "sock_set_default_impl", 00:16:55.525 "params": { 00:16:55.525 "impl_name": "posix" 00:16:55.525 } 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "method": "sock_impl_set_options", 00:16:55.525 "params": { 00:16:55.525 "enable_ktls": false, 00:16:55.525 "enable_placement_id": 0, 00:16:55.525 "enable_quickack": false, 00:16:55.525 "enable_recv_pipe": true, 00:16:55.525 "enable_zerocopy_send_client": false, 00:16:55.525 "enable_zerocopy_send_server": true, 00:16:55.525 "impl_name": "ssl", 00:16:55.525 "recv_buf_size": 4096, 00:16:55.525 "send_buf_size": 4096, 00:16:55.525 "tls_version": 0, 00:16:55.525 "zerocopy_threshold": 0 00:16:55.525 } 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "method": "sock_impl_set_options", 00:16:55.525 "params": { 00:16:55.525 "enable_ktls": false, 00:16:55.525 "enable_placement_id": 0, 00:16:55.525 "enable_quickack": false, 00:16:55.525 "enable_recv_pipe": true, 00:16:55.525 "enable_zerocopy_send_client": false, 00:16:55.525 "enable_zerocopy_send_server": true, 00:16:55.525 "impl_name": "posix", 00:16:55.525 "recv_buf_size": 2097152, 00:16:55.525 "send_buf_size": 2097152, 00:16:55.525 "tls_version": 0, 00:16:55.525 "zerocopy_threshold": 0 00:16:55.525 } 00:16:55.525 } 00:16:55.525 ] 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "subsystem": "vmd", 00:16:55.525 "config": [] 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "subsystem": "accel", 00:16:55.525 "config": [ 00:16:55.525 { 00:16:55.525 "method": "accel_set_options", 00:16:55.525 "params": { 00:16:55.525 "buf_count": 2048, 00:16:55.525 "large_cache_size": 16, 00:16:55.525 "sequence_count": 2048, 00:16:55.525 "small_cache_size": 128, 00:16:55.525 "task_count": 2048 00:16:55.525 } 00:16:55.525 } 00:16:55.525 ] 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "subsystem": "bdev", 00:16:55.525 "config": [ 00:16:55.525 { 00:16:55.525 "method": "bdev_set_options", 00:16:55.525 "params": { 
00:16:55.525 "bdev_auto_examine": true, 00:16:55.525 "bdev_io_cache_size": 256, 00:16:55.525 "bdev_io_pool_size": 65535, 00:16:55.525 "iobuf_large_cache_size": 16, 00:16:55.525 "iobuf_small_cache_size": 128 00:16:55.525 } 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "method": "bdev_raid_set_options", 00:16:55.525 "params": { 00:16:55.525 "process_window_size_kb": 1024 00:16:55.525 } 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "method": "bdev_iscsi_set_options", 00:16:55.525 "params": { 00:16:55.525 "timeout_sec": 30 00:16:55.525 } 00:16:55.525 }, 00:16:55.525 { 00:16:55.525 "method": "bdev_nvme_set_options", 00:16:55.525 "params": { 00:16:55.525 "action_on_timeout": "none", 00:16:55.525 "allow_accel_sequence": false, 00:16:55.525 "arbitration_burst": 0, 00:16:55.525 "bdev_retry_count": 3, 00:16:55.525 "ctrlr_loss_timeout_sec": 0, 00:16:55.525 "delay_cmd_submit": true, 00:16:55.525 "dhchap_dhgroups": [ 00:16:55.525 "null", 00:16:55.525 "ffdhe2048", 00:16:55.525 "ffdhe3072", 00:16:55.525 "ffdhe4096", 00:16:55.525 "ffdhe6144", 00:16:55.525 "ffdhe8192" 00:16:55.525 ], 00:16:55.525 "dhchap_digests": [ 00:16:55.525 "sha256", 00:16:55.525 "sha384", 00:16:55.525 "sha512" 00:16:55.525 ], 00:16:55.525 "disable_auto_failback": false, 00:16:55.525 "fast_io_fail_timeout_sec": 0, 00:16:55.525 "generate_uuids": false, 00:16:55.525 "high_priority_weight": 0, 00:16:55.525 "io_path_stat": false, 00:16:55.525 "io_queue_requests": 512, 00:16:55.525 "keep_alive_timeout_ms": 10000, 00:16:55.526 "low_priority_weight": 0, 00:16:55.526 "medium_priority_weight": 0, 00:16:55.526 "nvme_adminq_poll_period_us": 10000, 00:16:55.526 "nvme_error_stat": false, 00:16:55.526 "nvme_ioq_poll_period_us": 0, 00:16:55.526 "rdma_cm_event_timeout_ms": 0, 00:16:55.526 "rdma_max_cq_size": 0, 00:16:55.526 "rdma_srq_size": 0, 00:16:55.526 "reconnect_delay_sec": 0, 00:16:55.526 "timeout_admin_us": 0, 00:16:55.526 "timeout_us": 0, 00:16:55.526 "transport_ack_timeout": 0, 00:16:55.526 "transport_retry_count": 4, 00:16:55.526 "transport_tos": 0 00:16:55.526 } 00:16:55.526 }, 00:16:55.526 { 00:16:55.526 "method": "bdev_nvme_attach_controller", 00:16:55.526 "params": { 00:16:55.526 "adrfam": "IPv4", 00:16:55.526 "ctrlr_loss_timeout_sec": 0, 00:16:55.526 "ddgst": false, 00:16:55.526 "fast_io_fail_timeout_sec": 0, 00:16:55.526 "hdgst": false, 00:16:55.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.526 "name": "nvme0", 00:16:55.526 "prchk_guard": false, 00:16:55.526 "prchk_reftag": false, 00:16:55.526 "psk": "key0", 00:16:55.526 "reconnect_delay_sec": 0, 00:16:55.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.526 "traddr": "10.0.0.2", 00:16:55.526 "trsvcid": "4420", 00:16:55.526 "trtype": "TCP" 00:16:55.526 } 00:16:55.526 }, 00:16:55.526 { 00:16:55.526 "method": "bdev_nvme_set_hotplug", 00:16:55.526 "params": { 00:16:55.526 "enable": false, 00:16:55.526 "period_us": 100000 00:16:55.526 } 00:16:55.526 }, 00:16:55.526 { 00:16:55.526 "method": "bdev_enable_histogram", 00:16:55.526 "params": { 00:16:55.526 "enable": true, 00:16:55.526 "name": "nvme0n1" 00:16:55.526 } 00:16:55.526 }, 00:16:55.526 { 00:16:55.526 "method": "bdev_wait_for_examine" 00:16:55.526 } 00:16:55.526 ] 00:16:55.526 }, 00:16:55.526 { 00:16:55.526 "subsystem": "nbd", 00:16:55.526 "config": [] 00:16:55.526 } 00:16:55.526 ] 00:16:55.526 }' 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 85326 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85326 ']' 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 85326 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85326 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:55.526 killing process with pid 85326 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85326' 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85326 00:16:55.526 Received shutdown signal, test time was about 1.000000 seconds 00:16:55.526 00:16:55.526 Latency(us) 00:16:55.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.526 =================================================================================================================== 00:16:55.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.526 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85326 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 85275 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85275 ']' 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85275 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85275 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:55.783 killing process with pid 85275 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85275' 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85275 00:16:55.783 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85275 00:16:56.041 19:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:16:56.041 19:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.041 19:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:16:56.041 "subsystems": [ 00:16:56.041 { 00:16:56.041 "subsystem": "keyring", 00:16:56.041 "config": [ 00:16:56.041 { 00:16:56.041 "method": "keyring_file_add_key", 00:16:56.041 "params": { 00:16:56.041 "name": "key0", 00:16:56.041 "path": "/tmp/tmp.wviZjmEfTw" 00:16:56.041 } 00:16:56.041 } 00:16:56.041 ] 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "subsystem": "iobuf", 00:16:56.041 "config": [ 00:16:56.041 { 00:16:56.041 "method": "iobuf_set_options", 00:16:56.041 "params": { 00:16:56.041 "large_bufsize": 135168, 00:16:56.041 "large_pool_count": 1024, 00:16:56.041 "small_bufsize": 8192, 00:16:56.041 "small_pool_count": 8192 00:16:56.041 } 00:16:56.041 } 00:16:56.041 ] 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "subsystem": "sock", 00:16:56.041 "config": [ 00:16:56.041 { 00:16:56.041 "method": "sock_set_default_impl", 00:16:56.041 "params": { 00:16:56.041 "impl_name": "posix" 00:16:56.041 } 
00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "sock_impl_set_options", 00:16:56.041 "params": { 00:16:56.041 "enable_ktls": false, 00:16:56.041 "enable_placement_id": 0, 00:16:56.041 "enable_quickack": false, 00:16:56.041 "enable_recv_pipe": true, 00:16:56.041 "enable_zerocopy_send_client": false, 00:16:56.041 "enable_zerocopy_send_server": true, 00:16:56.041 "impl_name": "ssl", 00:16:56.041 "recv_buf_size": 4096, 00:16:56.041 "send_buf_size": 4096, 00:16:56.041 "tls_version": 0, 00:16:56.041 "zerocopy_threshold": 0 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "sock_impl_set_options", 00:16:56.041 "params": { 00:16:56.041 "enable_ktls": false, 00:16:56.041 "enable_placement_id": 0, 00:16:56.041 "enable_quickack": false, 00:16:56.041 "enable_recv_pipe": true, 00:16:56.041 "enable_zerocopy_send_client": false, 00:16:56.041 "enable_zerocopy_send_server": true, 00:16:56.041 "impl_name": "posix", 00:16:56.041 "recv_buf_size": 2097152, 00:16:56.041 "send_buf_size": 2097152, 00:16:56.041 "tls_version": 0, 00:16:56.041 "zerocopy_threshold": 0 00:16:56.041 } 00:16:56.041 } 00:16:56.041 ] 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "subsystem": "vmd", 00:16:56.041 "config": [] 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "subsystem": "accel", 00:16:56.041 "config": [ 00:16:56.041 { 00:16:56.041 "method": "accel_set_options", 00:16:56.041 "params": { 00:16:56.041 "buf_count": 2048, 00:16:56.041 "large_cache_size": 16, 00:16:56.041 "sequence_count": 2048, 00:16:56.041 "small_cache_size": 128, 00:16:56.041 "task_count": 2048 00:16:56.041 } 00:16:56.041 } 00:16:56.041 ] 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "subsystem": "bdev", 00:16:56.041 "config": [ 00:16:56.041 { 00:16:56.041 "method": "bdev_set_options", 00:16:56.041 "params": { 00:16:56.041 "bdev_auto_examine": true, 00:16:56.041 "bdev_io_cache_size": 256, 00:16:56.041 "bdev_io_pool_size": 65535, 00:16:56.041 "iobuf_large_cache_size": 16, 00:16:56.041 "iobuf_small_cache_size": 128 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "bdev_raid_set_options", 00:16:56.041 "params": { 00:16:56.041 "process_window_size_kb": 1024 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "bdev_iscsi_set_options", 00:16:56.041 "params": { 00:16:56.041 "timeout_sec": 30 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "bdev_nvme_set_options", 00:16:56.041 "params": { 00:16:56.041 "action_on_timeout": "none", 00:16:56.041 "allow_accel_sequence": false, 00:16:56.041 "arbitration_burst": 0, 00:16:56.041 "bdev_retry_count": 3, 00:16:56.041 "ctrlr_loss_timeout_sec": 0, 00:16:56.041 "delay_cmd_submit": true, 00:16:56.041 "dhchap_dhgroups": [ 00:16:56.041 "null", 00:16:56.041 "ffdhe2048", 00:16:56.041 "ffdhe3072", 00:16:56.041 "ffdhe4096", 00:16:56.041 "ffdhe6144", 00:16:56.041 "ffdhe8192" 00:16:56.041 ], 00:16:56.041 "dhchap_digests": [ 00:16:56.041 "sha256", 00:16:56.041 "sha384", 00:16:56.041 "sha512" 00:16:56.041 ], 00:16:56.041 "disable_auto_failback": false, 00:16:56.041 "fast_io_fail_timeout_sec": 0, 00:16:56.041 "generate_uuids": false, 00:16:56.041 "high_priority_weight": 0, 00:16:56.041 "io_path_stat": false, 00:16:56.041 "io_queue_requests": 0, 00:16:56.041 "keep_alive_timeout_ms": 10000, 00:16:56.041 "low_priority_weight": 0, 00:16:56.041 "medium_priority_weight": 0, 00:16:56.041 "nvme_adminq_poll_period_us": 10000, 00:16:56.041 "nvme_error_stat": false, 00:16:56.041 "nvme_ioq_poll_period_us": 0, 00:16:56.041 "rdma_cm_event_timeout_ms": 0, 00:16:56.041 
"rdma_max_cq_size": 0, 00:16:56.041 "rdma_srq_size": 0, 00:16:56.041 "reconnect_delay_sec": 0, 00:16:56.041 "timeout_admin_us": 0, 00:16:56.041 "timeout_us": 0, 00:16:56.041 "transport_ack_timeout": 0, 00:16:56.041 "transport_retry_count": 4, 00:16:56.041 "transport_tos": 0 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "bdev_nvme_set_hotplug", 00:16:56.041 "params": { 00:16:56.041 "enable": false, 00:16:56.041 "period_us": 100000 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "bdev_malloc_create", 00:16:56.041 "params": { 00:16:56.041 "block_size": 4096, 00:16:56.041 "name": "malloc0", 00:16:56.041 "num_blocks": 8192, 00:16:56.041 "optimal_io_boundary": 0, 00:16:56.041 "physical_block_size": 4096, 00:16:56.041 "uuid": "8e161ac8-48a2-4045-8ebf-b62219b66d25" 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "bdev_wait_for_examine" 00:16:56.041 } 00:16:56.041 ] 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "subsystem": "nbd", 00:16:56.041 "config": [] 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "subsystem": "scheduler", 00:16:56.041 "config": [ 00:16:56.041 { 00:16:56.041 "method": "framework_set_scheduler", 00:16:56.041 "params": { 00:16:56.041 "name": "static" 00:16:56.041 } 00:16:56.041 } 00:16:56.041 ] 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "subsystem": "nvmf", 00:16:56.041 "config": [ 00:16:56.041 { 00:16:56.041 "method": "nvmf_set_config", 00:16:56.041 "params": { 00:16:56.041 "admin_cmd_passthru": { 00:16:56.041 "identify_ctrlr": false 00:16:56.041 }, 00:16:56.041 "discovery_filter": "match_any" 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "nvmf_set_max_subsystems", 00:16:56.041 "params": { 00:16:56.041 "max_subsystems": 1024 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.041 "method": "nvmf_set_crdt", 00:16:56.041 "params": { 00:16:56.041 "crdt1": 0, 00:16:56.041 "crdt2": 0, 00:16:56.041 "crdt3": 0 00:16:56.041 } 00:16:56.041 }, 00:16:56.041 { 00:16:56.042 "method": "nvmf_create_transport", 00:16:56.042 "params": { 00:16:56.042 "abort_timeout_sec": 1, 00:16:56.042 "ack_timeout": 0, 00:16:56.042 "buf_cache_size": 4294967295, 00:16:56.042 "c2h_success": false, 00:16:56.042 "data_wr_pool_size": 0, 00:16:56.042 "dif_insert_or_strip": false, 00:16:56.042 "in_capsule_data_size": 4096, 00:16:56.042 "io_unit_size": 131072, 00:16:56.042 "max_aq_depth": 128, 00:16:56.042 "max_io_qpairs_per_ctrlr": 127, 00:16:56.042 "max_io_size": 131072, 00:16:56.042 "max_queue_depth": 128, 00:16:56.042 "num_shared_buffers": 511, 00:16:56.042 "sock_priority": 0, 00:16:56.042 "trtype": "TCP", 00:16:56.042 "zcopy": false 00:16:56.042 } 00:16:56.042 }, 00:16:56.042 { 00:16:56.042 "method": "nvmf_create_subsystem", 00:16:56.042 "params": { 00:16:56.042 "allow_any_host": false, 00:16:56.042 "ana_reporting": false, 00:16:56.042 "max_cntlid": 65519, 00:16:56.042 "max_namespaces": 32, 00:16:56.042 "min_cntlid": 1, 00:16:56.042 "model_number": "SPDK bdev Controller", 00:16:56.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.042 "serial_number": "00000000000000000000" 00:16:56.042 } 00:16:56.042 }, 00:16:56.042 { 00:16:56.042 "method": "nvmf_subsystem_add_host", 00:16:56.042 "params": { 00:16:56.042 "host": "nqn.2016-06.io.spdk:host1", 00:16:56.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.042 "psk": "key0" 00:16:56.042 } 00:16:56.042 }, 00:16:56.042 { 00:16:56.042 "method": "nvmf_subsystem_add_ns", 00:16:56.042 "params": { 00:16:56.042 "namespace": { 00:16:56.042 "bdev_name": "malloc0", 00:16:56.042 "nguid": 
"8E161AC848A240458EBFB62219B66D25", 00:16:56.042 "no_auto_visible": false, 00:16:56.042 "nsid": 1, 00:16:56.042 "uuid": "8e161ac8-48a2-4045-8ebf-b62219b66d25" 00:16:56.042 }, 00:16:56.042 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:56.042 } 00:16:56.042 }, 00:16:56.042 { 00:16:56.042 "method": "nvmf_subsystem_add_listener", 00:16:56.042 "params": { 00:16:56.042 "listen_address": { 00:16:56.042 "adrfam": "IPv4", 00:16:56.042 "traddr": "10.0.0.2", 00:16:56.042 "trsvcid": "4420", 00:16:56.042 "trtype": "TCP" 00:16:56.042 }, 00:16:56.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.042 "secure_channel": false, 00:16:56.042 "sock_impl": "ssl" 00:16:56.042 } 00:16:56.042 } 00:16:56.042 ] 00:16:56.042 } 00:16:56.042 ] 00:16:56.042 }' 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85411 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85411 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85411 ']' 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.042 19:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.042 [2024-07-15 19:45:21.726110] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:56.042 [2024-07-15 19:45:21.726240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.315 [2024-07-15 19:45:21.864133] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.315 [2024-07-15 19:45:21.976207] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.315 [2024-07-15 19:45:21.976264] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.315 [2024-07-15 19:45:21.976276] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.315 [2024-07-15 19:45:21.976285] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.315 [2024-07-15 19:45:21.976293] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.315 [2024-07-15 19:45:21.976378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.572 [2024-07-15 19:45:22.220913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.572 [2024-07-15 19:45:22.252864] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:56.572 [2024-07-15 19:45:22.253111] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=85456 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 85456 /var/tmp/bdevperf.sock 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85456 ']' 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:16:57.138 "subsystems": [ 00:16:57.138 { 00:16:57.138 "subsystem": "keyring", 00:16:57.138 "config": [ 00:16:57.138 { 00:16:57.138 "method": "keyring_file_add_key", 00:16:57.138 "params": { 00:16:57.138 "name": "key0", 00:16:57.138 "path": "/tmp/tmp.wviZjmEfTw" 00:16:57.138 } 00:16:57.138 } 00:16:57.138 ] 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "subsystem": "iobuf", 00:16:57.138 "config": [ 00:16:57.138 { 00:16:57.138 "method": "iobuf_set_options", 00:16:57.138 "params": { 00:16:57.138 "large_bufsize": 135168, 00:16:57.138 "large_pool_count": 1024, 00:16:57.138 "small_bufsize": 8192, 00:16:57.138 "small_pool_count": 8192 00:16:57.138 } 00:16:57.138 } 00:16:57.138 ] 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "subsystem": "sock", 00:16:57.138 "config": [ 00:16:57.138 { 00:16:57.138 "method": "sock_set_default_impl", 00:16:57.138 "params": { 00:16:57.138 "impl_name": "posix" 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "sock_impl_set_options", 00:16:57.138 "params": { 00:16:57.138 "enable_ktls": false, 00:16:57.138 "enable_placement_id": 0, 00:16:57.138 "enable_quickack": false, 00:16:57.138 "enable_recv_pipe": true, 00:16:57.138 "enable_zerocopy_send_client": false, 00:16:57.138 "enable_zerocopy_send_server": true, 00:16:57.138 "impl_name": "ssl", 00:16:57.138 "recv_buf_size": 4096, 00:16:57.138 "send_buf_size": 4096, 00:16:57.138 "tls_version": 0, 00:16:57.138 "zerocopy_threshold": 0 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "sock_impl_set_options", 00:16:57.138 "params": { 00:16:57.138 "enable_ktls": false, 00:16:57.138 "enable_placement_id": 0, 00:16:57.138 "enable_quickack": false, 00:16:57.138 "enable_recv_pipe": true, 00:16:57.138 "enable_zerocopy_send_client": false, 00:16:57.138 "enable_zerocopy_send_server": true, 
00:16:57.138 "impl_name": "posix", 00:16:57.138 "recv_buf_size": 2097152, 00:16:57.138 "send_buf_size": 2097152, 00:16:57.138 "tls_version": 0, 00:16:57.138 "zerocopy_threshold": 0 00:16:57.138 } 00:16:57.138 } 00:16:57.138 ] 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "subsystem": "vmd", 00:16:57.138 "config": [] 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "subsystem": "accel", 00:16:57.138 "config": [ 00:16:57.138 { 00:16:57.138 "method": "accel_set_options", 00:16:57.138 "params": { 00:16:57.138 "buf_count": 2048, 00:16:57.138 "large_cache_size": 16, 00:16:57.138 "sequence_count": 2048, 00:16:57.138 "small_cache_size": 128, 00:16:57.138 "task_count": 2048 00:16:57.138 } 00:16:57.138 } 00:16:57.138 ] 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "subsystem": "bdev", 00:16:57.138 "config": [ 00:16:57.138 { 00:16:57.138 "method": "bdev_set_options", 00:16:57.138 "params": { 00:16:57.138 "bdev_auto_examine": true, 00:16:57.138 "bdev_io_cache_size": 256, 00:16:57.138 "bdev_io_pool_size": 65535, 00:16:57.138 "iobuf_large_cache_size": 16, 00:16:57.138 "iobuf_small_cache_size": 128 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "bdev_raid_set_options", 00:16:57.138 "params": { 00:16:57.138 "process_window_size_kb": 1024 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "bdev_iscsi_set_options", 00:16:57.138 "params": { 00:16:57.138 "timeout_sec": 30 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "bdev_nvme_set_options", 00:16:57.138 "params": { 00:16:57.138 "action_on_timeout": "none", 00:16:57.138 "allow_accel_sequence": false, 00:16:57.138 "arbitration_burst": 0, 00:16:57.138 "bdev_retry_count": 3, 00:16:57.138 "ctrlr_loss_timeout_sec": 0, 00:16:57.138 "delay_cmd_submit": true, 00:16:57.138 "dhchap_dhgroups": [ 00:16:57.138 "null", 00:16:57.138 "ffdhe2048", 00:16:57.138 "ffdhe3072", 00:16:57.138 "ffdhe4096", 00:16:57.138 "ffdhe6144", 00:16:57.138 "ffdhe8192" 00:16:57.138 ], 00:16:57.138 "dhchap_digests": [ 00:16:57.138 "sha256", 00:16:57.138 "sha384", 00:16:57.138 "sha512" 00:16:57.138 ], 00:16:57.138 "disable_auto_failback": false, 00:16:57.138 "fast_io_fail_timeout_sec": 0, 00:16:57.138 "generate_uuids": false, 00:16:57.138 "high_priority_weight": 0, 00:16:57.138 "io_path_stat": false, 00:16:57.138 "io_queue_requests": 512, 00:16:57.138 "keep_alive_timeout_ms": 10000, 00:16:57.138 "low_priority_weight": 0, 00:16:57.138 "medium_priority_weight": 0, 00:16:57.138 "nvme_adminq_poll_period_us": 10000, 00:16:57.138 "nvme_error_stat": false, 00:16:57.138 "nvme_ioq_poll_period_us": 0, 00:16:57.138 "rdma_cm_event_timeout_ms": 0, 00:16:57.138 "rdma_max_cq_size": 0, 00:16:57.138 "rdma_srq_size": 0, 00:16:57.138 "reconnect_delay_sec": 0, 00:16:57.138 "timeout_admin_us": 0, 00:16:57.138 "timeout_us": 0, 00:16:57.138 "transport_ack_timeout": 0, 00:16:57.138 "transport_retry_count": 4, 00:16:57.138 "transport_tos": 0 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "bdev_nvme_attach_controller", 00:16:57.138 "params": { 00:16:57.138 "adrfam": "IPv4", 00:16:57.138 "ctrlr_loss_timeout_sec": 0, 00:16:57.138 "ddgst": false, 00:16:57.138 "fast_io_fail_timeout_sec": 0, 00:16:57.138 "hdgst": false, 00:16:57.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.138 "name": "nvme0", 00:16:57.138 "prchk_guard": false, 00:16:57.138 "prchk_reftag": false, 00:16:57.138 "psk": "key0", 00:16:57.138 "reconnect_delay_sec": 0, 00:16:57.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.138 "traddr": "10.0.0.2", 00:16:57.138 "trsvcid": "4420", 
00:16:57.138 "trtype": "TCP" 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "bdev_nvme_set_hotplug", 00:16:57.138 "params": { 00:16:57.138 "enable": false, 00:16:57.138 "period_us": 100000 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "bdev_enable_histogram", 00:16:57.138 "params": { 00:16:57.138 "enable": true, 00:16:57.138 "name": "nvme0n1" 00:16:57.138 } 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "method": "bdev_wait_for_examine" 00:16:57.138 } 00:16:57.138 ] 00:16:57.138 }, 00:16:57.138 { 00:16:57.138 "subsystem": "nbd", 00:16:57.138 "config": [] 00:16:57.138 } 00:16:57.138 ] 00:16:57.138 }' 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.138 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.139 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.139 19:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.139 [2024-07-15 19:45:22.809866] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:16:57.139 [2024-07-15 19:45:22.809976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85456 ] 00:16:57.396 [2024-07-15 19:45:22.950304] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.396 [2024-07-15 19:45:23.073874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.652 [2024-07-15 19:45:23.249649] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.216 19:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.216 19:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:58.216 19:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:58.216 19:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:16:58.474 19:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.474 19:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:58.474 Running I/O for 1 seconds... 
00:16:59.408 00:16:59.408 Latency(us) 00:16:59.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.408 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:59.408 Verification LBA range: start 0x0 length 0x2000 00:16:59.408 nvme0n1 : 1.03 3921.39 15.32 0.00 0.00 32106.67 8400.52 23950.43 00:16:59.408 =================================================================================================================== 00:16:59.408 Total : 3921.39 15.32 0.00 0.00 32106.67 8400.52 23950.43 00:16:59.408 0 00:16:59.408 19:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:16:59.408 19:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:16:59.408 19:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:59.408 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:16:59.408 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:16:59.409 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:59.409 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:59.409 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:59.409 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:59.409 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:59.409 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:59.409 nvmf_trace.0 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85456 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85456 ']' 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85456 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85456 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85456' 00:16:59.667 killing process with pid 85456 00:16:59.667 Received shutdown signal, test time was about 1.000000 seconds 00:16:59.667 00:16:59.667 Latency(us) 00:16:59.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.667 =================================================================================================================== 00:16:59.667 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85456 00:16:59.667 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85456 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:59.925 rmmod nvme_tcp 00:16:59.925 rmmod nvme_fabrics 00:16:59.925 rmmod nvme_keyring 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85411 ']' 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85411 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85411 ']' 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85411 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85411 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:59.925 killing process with pid 85411 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85411' 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85411 00:16:59.925 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85411 00:17:00.183 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.183 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:00.183 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:00.183 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.183 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.183 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.183 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.183 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.441 19:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:00.441 19:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.PV0nJogWFm /tmp/tmp.2PCMcxUtKY /tmp/tmp.wviZjmEfTw 00:17:00.441 00:17:00.441 real 1m27.767s 00:17:00.441 user 2m19.617s 00:17:00.441 sys 0m28.318s 00:17:00.441 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.441 19:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.441 ************************************ 00:17:00.441 END TEST nvmf_tls 00:17:00.441 ************************************ 00:17:00.441 19:45:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:00.441 19:45:26 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:00.441 19:45:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.441 19:45:26 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.441 19:45:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.441 ************************************ 00:17:00.441 START TEST nvmf_fips 00:17:00.441 ************************************ 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:00.441 * Looking for test storage... 00:17:00.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:00.441 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:00.442 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:17:00.700 Error setting digest 00:17:00.700 0082D341A57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:00.700 0082D341A57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:00.700 Cannot find device "nvmf_tgt_br" 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.700 Cannot find device "nvmf_tgt_br2" 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:00.700 Cannot find device "nvmf_tgt_br" 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:00.700 Cannot find device "nvmf_tgt_br2" 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:00.700 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:00.974 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:00.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:17:00.975 00:17:00.975 --- 10.0.0.2 ping statistics --- 00:17:00.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.975 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:00.975 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:00.975 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:00.975 00:17:00.975 --- 10.0.0.3 ping statistics --- 00:17:00.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.975 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:00.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:00.975 00:17:00.975 --- 10.0.0.1 ping statistics --- 00:17:00.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.975 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:00.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85746 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85746 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85746 ']' 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.975 19:45:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:01.246 [2024-07-15 19:45:26.764011] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:17:01.246 [2024-07-15 19:45:26.764108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.246 [2024-07-15 19:45:26.906434] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.504 [2024-07-15 19:45:27.040483] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.504 [2024-07-15 19:45:27.040598] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.504 [2024-07-15 19:45:27.040612] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.504 [2024-07-15 19:45:27.040624] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.504 [2024-07-15 19:45:27.040633] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.504 [2024-07-15 19:45:27.040672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:02.070 19:45:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:02.071 19:45:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:02.071 19:45:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:02.071 19:45:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:02.071 19:45:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:02.071 19:45:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:02.329 [2024-07-15 19:45:28.040557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.329 [2024-07-15 19:45:28.056479] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:02.329 [2024-07-15 19:45:28.056688] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.329 [2024-07-15 19:45:28.087415] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:02.329 malloc0 00:17:02.586 19:45:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.586 19:45:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85805 00:17:02.586 19:45:28 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@148 -- # waitforlisten 85805 /var/tmp/bdevperf.sock 00:17:02.586 19:45:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:02.586 19:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85805 ']' 00:17:02.586 19:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.586 19:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.587 19:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.587 19:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.587 19:45:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:02.587 [2024-07-15 19:45:28.180803] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:02.587 [2024-07-15 19:45:28.180928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85805 ] 00:17:02.587 [2024-07-15 19:45:28.315569] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.844 [2024-07-15 19:45:28.429735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.409 19:45:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.409 19:45:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:03.409 19:45:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:03.669 [2024-07-15 19:45:29.440894] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.669 [2024-07-15 19:45:29.441039] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:03.927 TLSTESTn1 00:17:03.927 19:45:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:03.927 Running I/O for 10 seconds... 
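Condensed, the fips flow traced above amounts to: confirm that a FIPS provider is active and that a non-approved digest (MD5) is refused, write the TLS PSK to a 0600 file, start bdevperf against the target's 10.0.0.2:4420 listener with --psk, and drive a 10-second verify workload. The sketch below only restates the commands visible in the trace as a standalone shell sequence; the paths and the PSK value are copied from the log, while the target-side subsystem/listener setup (fips.sh setup_nvmf_tgt_conf) and the waitforlisten synchronization on the RPC socket are assumed to have happened already.

  # Condensed sketch of the traced nvmf_fips flow; paths and PSK copied from the log above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

  # 1. FIPS sanity: a fips provider must be listed and MD5 must be refused
  #    (the harness exports OPENSSL_CONF=spdk_fips.conf before these checks).
  openssl list -providers | grep name
  openssl md5 /dev/null && echo 'unexpected: MD5 accepted' >&2   # expected to fail under FIPS

  # 2. TLS PSK used by the test, written with restrictive permissions.
  key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
  chmod 0600 "$key"

  # 3. bdevperf in wait-for-RPC mode (-z), then attach a TLS-enabled controller with --psk
  #    (the harness waits on /var/tmp/bdevperf.sock before issuing RPCs).
  "$bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$key"

  # 4. Run the 10-second verify workload whose results table follows in the log.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
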
00:17:16.119 00:17:16.119 Latency(us) 00:17:16.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.119 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:16.119 Verification LBA range: start 0x0 length 0x2000 00:17:16.119 TLSTESTn1 : 10.03 3840.20 15.00 0.00 0.00 33258.34 11915.64 35508.60 00:17:16.119 =================================================================================================================== 00:17:16.119 Total : 3840.20 15.00 0.00 0.00 33258.34 11915.64 35508.60 00:17:16.119 0 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:16.119 nvmf_trace.0 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85805 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85805 ']' 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85805 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85805 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:16.119 killing process with pid 85805 00:17:16.119 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.119 00:17:16.119 Latency(us) 00:17:16.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.119 =================================================================================================================== 00:17:16.119 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85805' 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85805 00:17:16.119 [2024-07-15 19:45:39.836941] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:16.119 19:45:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85805 00:17:16.119 19:45:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:16.119 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:17:16.119 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:16.119 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.119 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:16.119 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.119 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.119 rmmod nvme_tcp 00:17:16.120 rmmod nvme_fabrics 00:17:16.120 rmmod nvme_keyring 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85746 ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85746 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85746 ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85746 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85746 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:16.120 killing process with pid 85746 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85746' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85746 00:17:16.120 [2024-07-15 19:45:40.190802] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85746 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:16.120 00:17:16.120 real 0m14.430s 00:17:16.120 user 0m19.335s 00:17:16.120 sys 0m6.063s 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.120 19:45:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:16.120 ************************************ 00:17:16.120 END TEST nvmf_fips 00:17:16.120 ************************************ 00:17:16.120 19:45:40 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:16.120 19:45:40 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:17:16.120 19:45:40 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:17:16.120 19:45:40 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:17:16.120 19:45:40 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.120 19:45:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.120 19:45:40 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:17:16.120 19:45:40 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.120 19:45:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.120 19:45:40 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:17:16.120 19:45:40 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:16.120 19:45:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:16.120 19:45:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.120 19:45:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.120 ************************************ 00:17:16.120 START TEST nvmf_multicontroller 00:17:16.120 ************************************ 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:16.120 * Looking for test storage... 00:17:16.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
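Before the multicontroller target comes up, nvmftestinit repeats the veth/namespace bring-up already traced for the fips test, as seen again in the ip commands below. Condensed into a standalone sketch, assuming root privileges on a Linux host; every command is copied from the trace, and the trailing nvmf_tgt launch uses this test's 0xE core mask (cores 1-3, matching the reactor notices later in the log).

  # Sketch of the traced network bring-up for the NVMe/TCP tests.
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: one initiator-side pair, two target-side pairs moved into the namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and bridge the host-side peers together.
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Allow NVMe/TCP traffic and verify reachability in both directions.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

  # The target is then launched inside the namespace (flags taken from the trace).
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
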
00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.120 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.121 19:45:40 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:16.121 Cannot find device "nvmf_tgt_br" 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.121 Cannot find device "nvmf_tgt_br2" 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:16.121 Cannot find device "nvmf_tgt_br" 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:16.121 Cannot find device "nvmf_tgt_br2" 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:16.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:16.121 00:17:16.121 --- 10.0.0.2 ping statistics --- 00:17:16.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.121 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:16.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:16.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:16.121 00:17:16.121 --- 10.0.0.3 ping statistics --- 00:17:16.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.121 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:16.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:16.121 00:17:16.121 --- 10.0.0.1 ping statistics --- 00:17:16.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.121 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:16.121 19:45:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=86180 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 86180 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86180 ']' 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.121 19:45:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.121 [2024-07-15 19:45:41.086561] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:16.121 [2024-07-15 19:45:41.086703] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.121 [2024-07-15 19:45:41.229889] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:16.121 [2024-07-15 19:45:41.359835] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:16.121 [2024-07-15 19:45:41.359918] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.121 [2024-07-15 19:45:41.359945] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.121 [2024-07-15 19:45:41.359956] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.121 [2024-07-15 19:45:41.359965] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.121 [2024-07-15 19:45:41.360355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.121 [2024-07-15 19:45:41.360622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.121 [2024-07-15 19:45:41.360629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.378 [2024-07-15 19:45:42.143033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.378 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 Malloc0 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 [2024-07-15 19:45:42.204387] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 [2024-07-15 19:45:42.212276] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 Malloc1 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=86238 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86238 /var/tmp/bdevperf.sock 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86238 ']' 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.636 19:45:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.568 NVMe0n1 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.568 1 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.568 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 2024/07/15 19:45:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:17.826 request: 00:17:17.826 { 00:17:17.826 "method": "bdev_nvme_attach_controller", 00:17:17.826 "params": { 00:17:17.826 "name": "NVMe0", 00:17:17.826 "trtype": "tcp", 00:17:17.826 "traddr": "10.0.0.2", 00:17:17.826 "adrfam": "ipv4", 00:17:17.826 "trsvcid": "4420", 00:17:17.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.826 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:17.826 "hostaddr": "10.0.0.2", 00:17:17.826 "hostsvcid": "60000", 00:17:17.826 "prchk_reftag": false, 00:17:17.826 "prchk_guard": false, 00:17:17.826 "hdgst": false, 00:17:17.826 "ddgst": false 00:17:17.826 } 00:17:17.826 } 00:17:17.826 Got JSON-RPC error response 00:17:17.826 GoRPCClient: error on JSON-RPC call 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:17.826 19:45:43 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 2024/07/15 19:45:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:17.826 request: 00:17:17.826 { 00:17:17.826 "method": "bdev_nvme_attach_controller", 00:17:17.826 "params": { 00:17:17.826 "name": "NVMe0", 00:17:17.826 "trtype": "tcp", 00:17:17.826 "traddr": "10.0.0.2", 00:17:17.826 "adrfam": "ipv4", 00:17:17.826 "trsvcid": "4420", 00:17:17.826 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.826 "hostaddr": "10.0.0.2", 00:17:17.826 "hostsvcid": "60000", 00:17:17.826 "prchk_reftag": false, 00:17:17.826 "prchk_guard": false, 00:17:17.826 "hdgst": false, 00:17:17.826 "ddgst": false 00:17:17.826 } 00:17:17.826 } 00:17:17.826 Got JSON-RPC error response 00:17:17.826 GoRPCClient: error on JSON-RPC call 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:17.826 19:45:43 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 2024/07/15 19:45:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:17.826 request: 00:17:17.826 { 00:17:17.826 "method": "bdev_nvme_attach_controller", 00:17:17.826 "params": { 00:17:17.826 "name": "NVMe0", 00:17:17.826 "trtype": "tcp", 00:17:17.826 "traddr": "10.0.0.2", 00:17:17.826 "adrfam": "ipv4", 00:17:17.826 "trsvcid": "4420", 00:17:17.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.826 "hostaddr": "10.0.0.2", 00:17:17.826 "hostsvcid": "60000", 00:17:17.826 "prchk_reftag": false, 00:17:17.826 "prchk_guard": false, 00:17:17.826 "hdgst": false, 00:17:17.826 "ddgst": false, 00:17:17.826 "multipath": "disable" 00:17:17.826 } 00:17:17.826 } 00:17:17.826 Got JSON-RPC error response 00:17:17.826 GoRPCClient: error on JSON-RPC call 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 2024/07/15 19:45:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:17.826 request: 00:17:17.826 { 00:17:17.826 "method": "bdev_nvme_attach_controller", 00:17:17.826 "params": { 00:17:17.826 "name": "NVMe0", 00:17:17.826 "trtype": "tcp", 00:17:17.826 "traddr": "10.0.0.2", 00:17:17.826 "adrfam": "ipv4", 00:17:17.826 "trsvcid": "4420", 00:17:17.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.826 "hostaddr": "10.0.0.2", 00:17:17.826 "hostsvcid": "60000", 00:17:17.826 "prchk_reftag": false, 00:17:17.826 "prchk_guard": false, 00:17:17.826 "hdgst": false, 00:17:17.826 "ddgst": false, 00:17:17.826 "multipath": "failover" 00:17:17.826 } 00:17:17.826 } 00:17:17.826 Got JSON-RPC error response 00:17:17.826 GoRPCClient: error on JSON-RPC call 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.826 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:17.826 19:45:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:19.196 0 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86238 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86238 ']' 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86238 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86238 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:19.196 killing process with pid 86238 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86238' 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86238 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86238 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.196 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:17:19.454 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:19.454 [2024-07-15 19:45:42.336932] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:19.454 [2024-07-15 19:45:42.337075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86238 ] 00:17:19.454 [2024-07-15 19:45:42.478699] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.454 [2024-07-15 19:45:42.601658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.454 [2024-07-15 19:45:43.540379] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name b2fde617-971a-4f60-8a85-c9961241a7f7 already exists 00:17:19.454 [2024-07-15 19:45:43.540464] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:b2fde617-971a-4f60-8a85-c9961241a7f7 alias for bdev NVMe1n1 00:17:19.454 [2024-07-15 19:45:43.540497] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:19.454 Running I/O for 1 seconds... 00:17:19.454 00:17:19.454 Latency(us) 00:17:19.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.454 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:19.454 NVMe0n1 : 1.01 21017.53 82.10 0.00 0.00 6081.28 2189.50 12392.26 00:17:19.454 =================================================================================================================== 00:17:19.454 Total : 21017.53 82.10 0.00 0.00 6081.28 2189.50 12392.26 00:17:19.454 Received shutdown signal, test time was about 1.000000 seconds 00:17:19.454 00:17:19.454 Latency(us) 00:17:19.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.454 =================================================================================================================== 00:17:19.454 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.454 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.454 19:45:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.454 rmmod nvme_tcp 00:17:19.454 rmmod nvme_fabrics 00:17:19.454 rmmod nvme_keyring 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.454 19:45:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 86180 ']' 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 86180 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86180 ']' 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86180 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86180 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:19.454 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:19.454 killing process with pid 86180 00:17:19.455 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86180' 00:17:19.455 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86180 00:17:19.455 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86180 00:17:19.712 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.712 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.712 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.712 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.712 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.712 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.712 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.712 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.971 19:45:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:19.971 00:17:19.971 real 0m4.947s 00:17:19.971 user 0m15.170s 00:17:19.971 sys 0m1.095s 00:17:19.971 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.971 19:45:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:19.971 ************************************ 00:17:19.971 END TEST nvmf_multicontroller 00:17:19.971 ************************************ 00:17:19.971 19:45:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:19.971 19:45:45 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:19.971 19:45:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:19.971 19:45:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.971 19:45:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.971 ************************************ 00:17:19.971 START TEST nvmf_aer 00:17:19.971 ************************************ 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:19.971 * Looking for test storage... 00:17:19.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:19.971 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:19.972 Cannot find device "nvmf_tgt_br" 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.972 Cannot find device "nvmf_tgt_br2" 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:19.972 Cannot find device "nvmf_tgt_br" 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:19.972 Cannot find device "nvmf_tgt_br2" 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:17:19.972 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.230 
19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.230 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:20.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:20.230 00:17:20.230 --- 10.0.0.2 ping statistics --- 00:17:20.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.230 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:20.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:17:20.231 00:17:20.231 --- 10.0.0.3 ping statistics --- 00:17:20.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.231 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:20.231 00:17:20.231 --- 10.0.0.1 ping statistics --- 00:17:20.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.231 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.231 19:45:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86488 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86488 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86488 ']' 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.489 19:45:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.489 [2024-07-15 19:45:46.085246] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:20.489 [2024-07-15 19:45:46.086063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.489 [2024-07-15 19:45:46.231121] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.747 [2024-07-15 19:45:46.374665] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.747 [2024-07-15 19:45:46.374738] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:20.747 [2024-07-15 19:45:46.374753] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.747 [2024-07-15 19:45:46.374764] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.747 [2024-07-15 19:45:46.374773] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.747 [2024-07-15 19:45:46.374930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.747 [2024-07-15 19:45:46.375997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.747 [2024-07-15 19:45:46.376082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.747 [2024-07-15 19:45:46.376088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 [2024-07-15 19:45:47.163957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 Malloc0 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 [2024-07-15 19:45:47.249089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 [ 00:17:21.680 { 00:17:21.680 "allow_any_host": true, 00:17:21.680 "hosts": [], 00:17:21.680 "listen_addresses": [], 00:17:21.680 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:21.680 "subtype": "Discovery" 00:17:21.680 }, 00:17:21.680 { 00:17:21.680 "allow_any_host": true, 00:17:21.680 "hosts": [], 00:17:21.680 "listen_addresses": [ 00:17:21.680 { 00:17:21.680 "adrfam": "IPv4", 00:17:21.680 "traddr": "10.0.0.2", 00:17:21.680 "trsvcid": "4420", 00:17:21.680 "trtype": "TCP" 00:17:21.680 } 00:17:21.680 ], 00:17:21.680 "max_cntlid": 65519, 00:17:21.680 "max_namespaces": 2, 00:17:21.680 "min_cntlid": 1, 00:17:21.680 "model_number": "SPDK bdev Controller", 00:17:21.680 "namespaces": [ 00:17:21.680 { 00:17:21.680 "bdev_name": "Malloc0", 00:17:21.680 "name": "Malloc0", 00:17:21.680 "nguid": "666BF6B124134A40AA55740D4F99308B", 00:17:21.680 "nsid": 1, 00:17:21.680 "uuid": "666bf6b1-2413-4a40-aa55-740d4f99308b" 00:17:21.680 } 00:17:21.680 ], 00:17:21.680 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.680 "serial_number": "SPDK00000000000001", 00:17:21.680 "subtype": "NVMe" 00:17:21.680 } 00:17:21.680 ] 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86542 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:17:21.680 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 Malloc1 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 [ 00:17:21.939 { 00:17:21.939 "allow_any_host": true, 00:17:21.939 "hosts": [], 00:17:21.939 "listen_addresses": [], 00:17:21.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:21.939 "subtype": "Discovery" 00:17:21.939 }, 00:17:21.939 { 00:17:21.939 "allow_any_host": true, 00:17:21.939 "hosts": [], 00:17:21.939 "listen_addresses": [ 00:17:21.939 { 00:17:21.939 "adrfam": "IPv4", 00:17:21.939 "traddr": "10.0.0.2", 00:17:21.939 "trsvcid": "4420", 00:17:21.939 "trtype": "TCP" 00:17:21.939 } 00:17:21.939 ], 00:17:21.939 "max_cntlid": 65519, 00:17:21.939 "max_namespaces": 2, 00:17:21.939 "min_cntlid": 1, 00:17:21.939 "model_number": "SPDK bdev Controller", 00:17:21.939 "namespaces": [ 00:17:21.939 Asynchronous Event Request test 00:17:21.939 Attaching to 10.0.0.2 00:17:21.939 Attached to 10.0.0.2 00:17:21.939 Registering asynchronous event callbacks... 00:17:21.939 Starting namespace attribute notice tests for all controllers... 00:17:21.939 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:21.939 aer_cb - Changed Namespace 00:17:21.939 Cleaning up... 
00:17:21.939 { 00:17:21.939 "bdev_name": "Malloc0", 00:17:21.939 "name": "Malloc0", 00:17:21.939 "nguid": "666BF6B124134A40AA55740D4F99308B", 00:17:21.939 "nsid": 1, 00:17:21.939 "uuid": "666bf6b1-2413-4a40-aa55-740d4f99308b" 00:17:21.939 }, 00:17:21.939 { 00:17:21.939 "bdev_name": "Malloc1", 00:17:21.939 "name": "Malloc1", 00:17:21.939 "nguid": "F88559963EFC49EF8B8394D650985419", 00:17:21.939 "nsid": 2, 00:17:21.939 "uuid": "f8855996-3efc-49ef-8b83-94d650985419" 00:17:21.939 } 00:17:21.939 ], 00:17:21.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.939 "serial_number": "SPDK00000000000001", 00:17:21.939 "subtype": "NVMe" 00:17:21.939 } 00:17:21.939 ] 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86542 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.939 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.939 rmmod nvme_tcp 00:17:22.198 rmmod nvme_fabrics 00:17:22.198 rmmod nvme_keyring 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86488 ']' 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86488 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86488 ']' 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86488 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 86488 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:22.198 killing process with pid 86488 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86488' 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86488 00:17:22.198 19:45:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86488 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:22.457 00:17:22.457 real 0m2.530s 00:17:22.457 user 0m6.863s 00:17:22.457 sys 0m0.717s 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.457 19:45:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:22.457 ************************************ 00:17:22.457 END TEST nvmf_aer 00:17:22.457 ************************************ 00:17:22.457 19:45:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:22.457 19:45:48 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:22.457 19:45:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:22.457 19:45:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.457 19:45:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.457 ************************************ 00:17:22.457 START TEST nvmf_async_init 00:17:22.457 ************************************ 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:22.457 * Looking for test storage... 
00:17:22.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.457 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:22.458 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8b5cdd269ed1420e89b66cda04c03b90 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.717 19:45:48 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:22.717 Cannot find device "nvmf_tgt_br" 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.717 Cannot find device "nvmf_tgt_br2" 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:22.717 Cannot find device "nvmf_tgt_br" 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:17:22.717 Cannot find device "nvmf_tgt_br2" 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:22.717 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:22.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:22.975 00:17:22.975 --- 10.0.0.2 ping statistics --- 00:17:22.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.975 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:22.975 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.975 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:22.975 00:17:22.975 --- 10.0.0.3 ping statistics --- 00:17:22.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.975 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:22.975 00:17:22.975 --- 10.0.0.1 ping statistics --- 00:17:22.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.975 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86718 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86718 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86718 ']' 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
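Before the async_init target comes up, nvmf_veth_init (traced above) builds the test network: a dedicated namespace nvmf_tgt_ns_spdk holds the target-side ends of the veth pairs, the initiator end stays in the default namespace, the bridge ends are joined through nvmf_br, port 4420 is opened in iptables, and connectivity is verified with pings. A minimal standalone reconstruction of that bring-up from the commands visible in the trace (interface names and the 10.0.0.0/24 addressing are taken from the log; run as root):

  # Sketch of the veth/namespace topology built by nvmf_veth_init above.
  ip netns add nvmf_tgt_ns_spdk

  # One veth pair for the initiator, two for the target-side interfaces.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target-side ends into the namespace the nvmf target will run in.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addresses: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring the links up and bridge the host-side ends into one L2 segment.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Allow NVMe/TCP traffic and bridge forwarding, then verify connectivity.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1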
00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.975 19:45:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:22.975 [2024-07-15 19:45:48.660097] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:22.975 [2024-07-15 19:45:48.660263] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.233 [2024-07-15 19:45:48.795204] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.233 [2024-07-15 19:45:48.897707] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.233 [2024-07-15 19:45:48.897802] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.233 [2024-07-15 19:45:48.897852] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.233 [2024-07-15 19:45:48.897861] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.233 [2024-07-15 19:45:48.897869] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.233 [2024-07-15 19:45:48.897896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.167 [2024-07-15 19:45:49.749847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.167 null0 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.167 
19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b5cdd269ed1420e89b66cda04c03b90 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.167 [2024-07-15 19:45:49.789972] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.167 19:45:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.425 nvme0n1 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.425 [ 00:17:24.425 { 00:17:24.425 "aliases": [ 00:17:24.425 "8b5cdd26-9ed1-420e-89b6-6cda04c03b90" 00:17:24.425 ], 00:17:24.425 "assigned_rate_limits": { 00:17:24.425 "r_mbytes_per_sec": 0, 00:17:24.425 "rw_ios_per_sec": 0, 00:17:24.425 "rw_mbytes_per_sec": 0, 00:17:24.425 "w_mbytes_per_sec": 0 00:17:24.425 }, 00:17:24.425 "block_size": 512, 00:17:24.425 "claimed": false, 00:17:24.425 "driver_specific": { 00:17:24.425 "mp_policy": "active_passive", 00:17:24.425 "nvme": [ 00:17:24.425 { 00:17:24.425 "ctrlr_data": { 00:17:24.425 "ana_reporting": false, 00:17:24.425 "cntlid": 1, 00:17:24.425 "firmware_revision": "24.09", 00:17:24.425 "model_number": "SPDK bdev Controller", 00:17:24.425 "multi_ctrlr": true, 00:17:24.425 "oacs": { 00:17:24.425 "firmware": 0, 00:17:24.425 "format": 0, 00:17:24.425 "ns_manage": 0, 00:17:24.425 "security": 0 00:17:24.425 }, 00:17:24.425 "serial_number": "00000000000000000000", 00:17:24.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:24.425 "vendor_id": "0x8086" 00:17:24.425 }, 00:17:24.425 "ns_data": { 00:17:24.425 "can_share": true, 00:17:24.425 "id": 1 00:17:24.425 }, 00:17:24.425 "trid": { 00:17:24.425 "adrfam": "IPv4", 
00:17:24.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:24.425 "traddr": "10.0.0.2", 00:17:24.425 "trsvcid": "4420", 00:17:24.425 "trtype": "TCP" 00:17:24.425 }, 00:17:24.425 "vs": { 00:17:24.425 "nvme_version": "1.3" 00:17:24.425 } 00:17:24.425 } 00:17:24.425 ] 00:17:24.425 }, 00:17:24.425 "memory_domains": [ 00:17:24.425 { 00:17:24.425 "dma_device_id": "system", 00:17:24.425 "dma_device_type": 1 00:17:24.425 } 00:17:24.425 ], 00:17:24.425 "name": "nvme0n1", 00:17:24.425 "num_blocks": 2097152, 00:17:24.425 "product_name": "NVMe disk", 00:17:24.425 "supported_io_types": { 00:17:24.425 "abort": true, 00:17:24.425 "compare": true, 00:17:24.425 "compare_and_write": true, 00:17:24.425 "copy": true, 00:17:24.425 "flush": true, 00:17:24.425 "get_zone_info": false, 00:17:24.425 "nvme_admin": true, 00:17:24.425 "nvme_io": true, 00:17:24.425 "nvme_io_md": false, 00:17:24.425 "nvme_iov_md": false, 00:17:24.425 "read": true, 00:17:24.425 "reset": true, 00:17:24.425 "seek_data": false, 00:17:24.425 "seek_hole": false, 00:17:24.425 "unmap": false, 00:17:24.425 "write": true, 00:17:24.425 "write_zeroes": true, 00:17:24.425 "zcopy": false, 00:17:24.425 "zone_append": false, 00:17:24.425 "zone_management": false 00:17:24.425 }, 00:17:24.425 "uuid": "8b5cdd26-9ed1-420e-89b6-6cda04c03b90", 00:17:24.425 "zoned": false 00:17:24.425 } 00:17:24.425 ] 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.425 [2024-07-15 19:45:50.059008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:24.425 [2024-07-15 19:45:50.059219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa09c20 (9): Bad file descriptor 00:17:24.425 [2024-07-15 19:45:50.191300] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
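Everything host/async_init.sh does once the app is up is plain JSON-RPC against the single nvmf_tgt instance: create the TCP transport, back subsystem cnode0 with a 1024 MiB null bdev, expose it on 10.0.0.2:4420, attach a host-side controller, verify via bdev_get_bdevs, reset, and then repeat the attach through the secure-channel listener on port 4421 exercised a little further down in this trace. A condensed sketch of that flow; the rpc() shorthand for scripts/rpc.py is an illustration of my own (the test drives the same methods through its rpc_cmd helper), and the nguid and PSK strings are simply the per-run values echoed in the trace:

  # Condensed RPC flow of host/async_init.sh as traced (sketch only).
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }    # talks to /var/tmp/spdk.sock

  rpc nvmf_create_transport -t tcp -o
  rpc bdev_null_create null0 1024 512                            # 2097152 x 512 B blocks, as in the dump
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a        # -a: allow any host for now
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b5cdd269ed1420e89b66cda04c03b90
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Plain attach, inspect, reset, inspect again: cntlid goes 1 -> 2 across the reset.
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  rpc bdev_get_bdevs -b nvme0n1
  rpc bdev_nvme_reset_controller nvme0
  rpc bdev_get_bdevs -b nvme0n1
  rpc bdev_nvme_detach_controller nvme0

  # TLS portion (appears later in the trace): PSK file, secure-channel listener on
  # 4421, per-host registration, re-attach with the same key; cntlid becomes 3.
  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  rpc bdev_get_bdevs -b nvme0n1
  rpc bdev_nvme_detach_controller nvme0
  rm -f "$key_path"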
00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.425 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.425 [ 00:17:24.425 { 00:17:24.425 "aliases": [ 00:17:24.425 "8b5cdd26-9ed1-420e-89b6-6cda04c03b90" 00:17:24.425 ], 00:17:24.425 "assigned_rate_limits": { 00:17:24.425 "r_mbytes_per_sec": 0, 00:17:24.425 "rw_ios_per_sec": 0, 00:17:24.425 "rw_mbytes_per_sec": 0, 00:17:24.425 "w_mbytes_per_sec": 0 00:17:24.425 }, 00:17:24.425 "block_size": 512, 00:17:24.425 "claimed": false, 00:17:24.425 "driver_specific": { 00:17:24.425 "mp_policy": "active_passive", 00:17:24.425 "nvme": [ 00:17:24.425 { 00:17:24.425 "ctrlr_data": { 00:17:24.425 "ana_reporting": false, 00:17:24.425 "cntlid": 2, 00:17:24.425 "firmware_revision": "24.09", 00:17:24.425 "model_number": "SPDK bdev Controller", 00:17:24.425 "multi_ctrlr": true, 00:17:24.425 "oacs": { 00:17:24.425 "firmware": 0, 00:17:24.683 "format": 0, 00:17:24.683 "ns_manage": 0, 00:17:24.683 "security": 0 00:17:24.683 }, 00:17:24.683 "serial_number": "00000000000000000000", 00:17:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:24.683 "vendor_id": "0x8086" 00:17:24.683 }, 00:17:24.683 "ns_data": { 00:17:24.683 "can_share": true, 00:17:24.683 "id": 1 00:17:24.683 }, 00:17:24.683 "trid": { 00:17:24.683 "adrfam": "IPv4", 00:17:24.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:24.683 "traddr": "10.0.0.2", 00:17:24.683 "trsvcid": "4420", 00:17:24.683 "trtype": "TCP" 00:17:24.683 }, 00:17:24.683 "vs": { 00:17:24.683 "nvme_version": "1.3" 00:17:24.683 } 00:17:24.683 } 00:17:24.683 ] 00:17:24.683 }, 00:17:24.683 "memory_domains": [ 00:17:24.683 { 00:17:24.683 "dma_device_id": "system", 00:17:24.683 "dma_device_type": 1 00:17:24.683 } 00:17:24.683 ], 00:17:24.683 "name": "nvme0n1", 00:17:24.683 "num_blocks": 2097152, 00:17:24.683 "product_name": "NVMe disk", 00:17:24.683 "supported_io_types": { 00:17:24.683 "abort": true, 00:17:24.683 "compare": true, 00:17:24.683 "compare_and_write": true, 00:17:24.683 "copy": true, 00:17:24.683 "flush": true, 00:17:24.683 "get_zone_info": false, 00:17:24.683 "nvme_admin": true, 00:17:24.683 "nvme_io": true, 00:17:24.683 "nvme_io_md": false, 00:17:24.683 "nvme_iov_md": false, 00:17:24.683 "read": true, 00:17:24.683 "reset": true, 00:17:24.683 "seek_data": false, 00:17:24.683 "seek_hole": false, 00:17:24.683 "unmap": false, 00:17:24.683 "write": true, 00:17:24.683 "write_zeroes": true, 00:17:24.683 "zcopy": false, 00:17:24.683 "zone_append": false, 00:17:24.683 "zone_management": false 00:17:24.683 }, 00:17:24.683 "uuid": "8b5cdd26-9ed1-420e-89b6-6cda04c03b90", 00:17:24.683 "zoned": false 00:17:24.683 } 00:17:24.683 ] 00:17:24.683 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.683 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:24.684 19:45:50 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xD3S08zQbU 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xD3S08zQbU 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.684 [2024-07-15 19:45:50.251126] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.684 [2024-07-15 19:45:50.251342] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xD3S08zQbU 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.684 [2024-07-15 19:45:50.259104] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xD3S08zQbU 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.684 [2024-07-15 19:45:50.267098] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.684 [2024-07-15 19:45:50.267216] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:24.684 nvme0n1 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.684 [ 00:17:24.684 { 00:17:24.684 "aliases": [ 00:17:24.684 "8b5cdd26-9ed1-420e-89b6-6cda04c03b90" 00:17:24.684 ], 00:17:24.684 "assigned_rate_limits": { 00:17:24.684 "r_mbytes_per_sec": 0, 00:17:24.684 
"rw_ios_per_sec": 0, 00:17:24.684 "rw_mbytes_per_sec": 0, 00:17:24.684 "w_mbytes_per_sec": 0 00:17:24.684 }, 00:17:24.684 "block_size": 512, 00:17:24.684 "claimed": false, 00:17:24.684 "driver_specific": { 00:17:24.684 "mp_policy": "active_passive", 00:17:24.684 "nvme": [ 00:17:24.684 { 00:17:24.684 "ctrlr_data": { 00:17:24.684 "ana_reporting": false, 00:17:24.684 "cntlid": 3, 00:17:24.684 "firmware_revision": "24.09", 00:17:24.684 "model_number": "SPDK bdev Controller", 00:17:24.684 "multi_ctrlr": true, 00:17:24.684 "oacs": { 00:17:24.684 "firmware": 0, 00:17:24.684 "format": 0, 00:17:24.684 "ns_manage": 0, 00:17:24.684 "security": 0 00:17:24.684 }, 00:17:24.684 "serial_number": "00000000000000000000", 00:17:24.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:24.684 "vendor_id": "0x8086" 00:17:24.684 }, 00:17:24.684 "ns_data": { 00:17:24.684 "can_share": true, 00:17:24.684 "id": 1 00:17:24.684 }, 00:17:24.684 "trid": { 00:17:24.684 "adrfam": "IPv4", 00:17:24.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:24.684 "traddr": "10.0.0.2", 00:17:24.684 "trsvcid": "4421", 00:17:24.684 "trtype": "TCP" 00:17:24.684 }, 00:17:24.684 "vs": { 00:17:24.684 "nvme_version": "1.3" 00:17:24.684 } 00:17:24.684 } 00:17:24.684 ] 00:17:24.684 }, 00:17:24.684 "memory_domains": [ 00:17:24.684 { 00:17:24.684 "dma_device_id": "system", 00:17:24.684 "dma_device_type": 1 00:17:24.684 } 00:17:24.684 ], 00:17:24.684 "name": "nvme0n1", 00:17:24.684 "num_blocks": 2097152, 00:17:24.684 "product_name": "NVMe disk", 00:17:24.684 "supported_io_types": { 00:17:24.684 "abort": true, 00:17:24.684 "compare": true, 00:17:24.684 "compare_and_write": true, 00:17:24.684 "copy": true, 00:17:24.684 "flush": true, 00:17:24.684 "get_zone_info": false, 00:17:24.684 "nvme_admin": true, 00:17:24.684 "nvme_io": true, 00:17:24.684 "nvme_io_md": false, 00:17:24.684 "nvme_iov_md": false, 00:17:24.684 "read": true, 00:17:24.684 "reset": true, 00:17:24.684 "seek_data": false, 00:17:24.684 "seek_hole": false, 00:17:24.684 "unmap": false, 00:17:24.684 "write": true, 00:17:24.684 "write_zeroes": true, 00:17:24.684 "zcopy": false, 00:17:24.684 "zone_append": false, 00:17:24.684 "zone_management": false 00:17:24.684 }, 00:17:24.684 "uuid": "8b5cdd26-9ed1-420e-89b6-6cda04c03b90", 00:17:24.684 "zoned": false 00:17:24.684 } 00:17:24.684 ] 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.xD3S08zQbU 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.684 rmmod nvme_tcp 00:17:24.684 rmmod nvme_fabrics 00:17:24.684 rmmod nvme_keyring 00:17:24.684 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86718 ']' 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86718 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86718 ']' 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86718 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86718 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:24.941 killing process with pid 86718 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86718' 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86718 00:17:24.941 [2024-07-15 19:45:50.501120] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:24.941 [2024-07-15 19:45:50.501194] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:24.941 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86718 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:25.200 00:17:25.200 real 0m2.628s 00:17:25.200 user 0m2.504s 00:17:25.200 sys 0m0.612s 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.200 ************************************ 00:17:25.200 END TEST nvmf_async_init 00:17:25.200 19:45:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.200 ************************************ 00:17:25.200 19:45:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.200 19:45:50 nvmf_tcp -- 
nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:25.200 19:45:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.200 19:45:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.200 19:45:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.200 ************************************ 00:17:25.200 START TEST dma 00:17:25.200 ************************************ 00:17:25.200 19:45:50 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:25.200 * Looking for test storage... 00:17:25.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:25.200 19:45:50 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.200 19:45:50 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.200 19:45:50 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.200 19:45:50 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.200 19:45:50 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.200 19:45:50 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.200 19:45:50 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.200 19:45:50 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:25.200 19:45:50 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.200 19:45:50 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.200 19:45:50 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:25.200 19:45:50 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:25.200 00:17:25.200 real 0m0.110s 00:17:25.200 user 0m0.051s 00:17:25.200 sys 0m0.065s 00:17:25.200 19:45:50 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.200 19:45:50 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:25.200 ************************************ 00:17:25.200 END TEST dma 00:17:25.200 ************************************ 00:17:25.200 19:45:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.200 19:45:50 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:25.200 19:45:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.200 19:45:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.200 19:45:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.200 
************************************ 00:17:25.200 START TEST nvmf_identify 00:17:25.200 ************************************ 00:17:25.200 19:45:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:25.458 * Looking for test storage... 00:17:25.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.458 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:25.459 Cannot find device "nvmf_tgt_br" 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:25.459 Cannot find device "nvmf_tgt_br2" 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:25.459 Cannot find device "nvmf_tgt_br" 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:25.459 Cannot find device "nvmf_tgt_br2" 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.459 19:45:51 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:25.459 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:25.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:25.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:17:25.717 00:17:25.717 --- 10.0.0.2 ping statistics --- 00:17:25.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.717 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:25.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:25.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:25.717 00:17:25.717 --- 10.0.0.3 ping statistics --- 00:17:25.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.717 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:25.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:25.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:17:25.717 00:17:25.717 --- 10.0.0.1 ping statistics --- 00:17:25.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.717 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86990 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86990 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86990 ']' 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
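The nvmf_veth_init sequence traced above gives the target its own network namespace, three veth pairs, and a bridge tying the host-side peers together, then verifies connectivity with single pings in both directions. A minimal standalone sketch of the same bring-up, assuming root privileges and a host with no leftover nvmf_* interfaces (commands taken verbatim from the trace):

# create the target network namespace and three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# verify connectivity host -> namespace and namespace -> host
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The iptables rules are what allow the later NVMe/TCP connections between 10.0.0.1 (initiator side) and 10.0.0.2 (target side) to pass across the bridge.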
00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.717 19:45:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:25.975 [2024-07-15 19:45:51.534645] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:25.975 [2024-07-15 19:45:51.534750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.975 [2024-07-15 19:45:51.679544] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.232 [2024-07-15 19:45:51.812843] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.232 [2024-07-15 19:45:51.812920] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.232 [2024-07-15 19:45:51.812946] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.232 [2024-07-15 19:45:51.812968] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.232 [2024-07-15 19:45:51.812978] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.232 [2024-07-15 19:45:51.813131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.232 [2024-07-15 19:45:51.813479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.232 [2024-07-15 19:45:51.814397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.232 [2024-07-15 19:45:51.814405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:26.800 [2024-07-15 19:45:52.550172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.800 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.058 Malloc0 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
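The target is started inside the namespace with the nvmf_tgt command traced above, and the trace that follows configures it over the RPC socket (TCP transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, plus data and discovery listeners on 10.0.0.2:4420) before spdk_nvme_identify is pointed at the discovery subsystem. A consolidated sketch of those steps, assuming the test's rpc_cmd helper is equivalent to calling scripts/rpc.py against the default /var/tmp/spdk.sock (the rpc.py path below is an assumption about this workspace):

# launch the target in the namespace; -m 0xF uses the 4 available cores, -e 0xFFFF enables all tracepoint groups
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# wait for the RPC socket (simplified stand-in for the test's waitforlisten helper)
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of SPDK's rpc.py in this repo checkout
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems
# query the discovery controller from the initiator side; this produces the controller dump further down
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all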
00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.058 [2024-07-15 19:45:52.661969] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.058 [ 00:17:27.058 { 00:17:27.058 "allow_any_host": true, 00:17:27.058 "hosts": [], 00:17:27.058 "listen_addresses": [ 00:17:27.058 { 00:17:27.058 "adrfam": "IPv4", 00:17:27.058 "traddr": "10.0.0.2", 00:17:27.058 "trsvcid": "4420", 00:17:27.058 "trtype": "TCP" 00:17:27.058 } 00:17:27.058 ], 00:17:27.058 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:27.058 "subtype": "Discovery" 00:17:27.058 }, 00:17:27.058 { 00:17:27.058 "allow_any_host": true, 00:17:27.058 "hosts": [], 00:17:27.058 "listen_addresses": [ 00:17:27.058 { 00:17:27.058 "adrfam": "IPv4", 00:17:27.058 "traddr": "10.0.0.2", 00:17:27.058 "trsvcid": "4420", 00:17:27.058 "trtype": "TCP" 00:17:27.058 } 00:17:27.058 ], 00:17:27.058 "max_cntlid": 65519, 00:17:27.058 "max_namespaces": 32, 00:17:27.058 "min_cntlid": 1, 00:17:27.058 "model_number": "SPDK bdev Controller", 00:17:27.058 "namespaces": [ 00:17:27.058 { 00:17:27.058 "bdev_name": "Malloc0", 00:17:27.058 "eui64": "ABCDEF0123456789", 00:17:27.058 "name": "Malloc0", 00:17:27.058 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:27.058 "nsid": 1, 00:17:27.058 "uuid": "095acf01-2ede-4f5c-870d-5e97e3cf3a05" 00:17:27.058 } 00:17:27.058 ], 00:17:27.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.058 "serial_number": "SPDK00000000000001", 00:17:27.058 "subtype": "NVMe" 00:17:27.058 } 00:17:27.058 ] 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.058 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 
-L all 00:17:27.058 [2024-07-15 19:45:52.714179] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:27.058 [2024-07-15 19:45:52.714255] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87043 ] 00:17:27.320 [2024-07-15 19:45:52.854491] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:27.320 [2024-07-15 19:45:52.854595] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:27.320 [2024-07-15 19:45:52.854603] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:27.320 [2024-07-15 19:45:52.854616] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:27.320 [2024-07-15 19:45:52.854624] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:27.320 [2024-07-15 19:45:52.854789] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:27.320 [2024-07-15 19:45:52.854838] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd9fc00 0 00:17:27.320 [2024-07-15 19:45:52.859237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:27.320 [2024-07-15 19:45:52.859292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:27.320 [2024-07-15 19:45:52.859298] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:27.320 [2024-07-15 19:45:52.859302] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:27.320 [2024-07-15 19:45:52.859364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.859372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.859377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.320 [2024-07-15 19:45:52.859392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:27.320 [2024-07-15 19:45:52.859428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.320 [2024-07-15 19:45:52.867177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.320 [2024-07-15 19:45:52.867200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.320 [2024-07-15 19:45:52.867229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.320 [2024-07-15 19:45:52.867249] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:27.320 [2024-07-15 19:45:52.867269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:27.320 [2024-07-15 19:45:52.867275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:27.320 [2024-07-15 19:45:52.867309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:27.320 [2024-07-15 19:45:52.867315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.320 [2024-07-15 19:45:52.867329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.320 [2024-07-15 19:45:52.867365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.320 [2024-07-15 19:45:52.867445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.320 [2024-07-15 19:45:52.867452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.320 [2024-07-15 19:45:52.867456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.320 [2024-07-15 19:45:52.867466] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:27.320 [2024-07-15 19:45:52.867474] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:27.320 [2024-07-15 19:45:52.867482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.320 [2024-07-15 19:45:52.867498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.320 [2024-07-15 19:45:52.867535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.320 [2024-07-15 19:45:52.867595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.320 [2024-07-15 19:45:52.867602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.320 [2024-07-15 19:45:52.867606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.320 [2024-07-15 19:45:52.867617] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:27.320 [2024-07-15 19:45:52.867626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:27.320 [2024-07-15 19:45:52.867650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.320 [2024-07-15 19:45:52.867665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.320 [2024-07-15 19:45:52.867684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.320 [2024-07-15 19:45:52.867741] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.320 [2024-07-15 19:45:52.867748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.320 [2024-07-15 19:45:52.867752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.320 [2024-07-15 19:45:52.867762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:27.320 [2024-07-15 19:45:52.867772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.320 [2024-07-15 19:45:52.867789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.320 [2024-07-15 19:45:52.867807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.320 [2024-07-15 19:45:52.867862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.320 [2024-07-15 19:45:52.867869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.320 [2024-07-15 19:45:52.867873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.867877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.320 [2024-07-15 19:45:52.867882] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:27.320 [2024-07-15 19:45:52.867887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:27.320 [2024-07-15 19:45:52.867895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:27.320 [2024-07-15 19:45:52.868001] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:27.320 [2024-07-15 19:45:52.868006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:27.320 [2024-07-15 19:45:52.868016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.868020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.868024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.320 [2024-07-15 19:45:52.868032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.320 [2024-07-15 19:45:52.868051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.320 [2024-07-15 19:45:52.868105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.320 [2024-07-15 19:45:52.868112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.320 [2024-07-15 
19:45:52.868116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.868120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.320 [2024-07-15 19:45:52.868125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:27.320 [2024-07-15 19:45:52.868136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.868140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.868144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.320 [2024-07-15 19:45:52.868152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.320 [2024-07-15 19:45:52.868187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.320 [2024-07-15 19:45:52.868261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.320 [2024-07-15 19:45:52.868270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.320 [2024-07-15 19:45:52.868274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.320 [2024-07-15 19:45:52.868278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.320 [2024-07-15 19:45:52.868283] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:27.320 [2024-07-15 19:45:52.868289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:27.320 [2024-07-15 19:45:52.868298] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:27.320 [2024-07-15 19:45:52.868308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:27.320 [2024-07-15 19:45:52.868319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.868332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.321 [2024-07-15 19:45:52.868354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.321 [2024-07-15 19:45:52.868450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.321 [2024-07-15 19:45:52.868457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.321 [2024-07-15 19:45:52.868461] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868465] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9fc00): datao=0, datal=4096, cccid=0 00:17:27.321 [2024-07-15 19:45:52.868470] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde29c0) on tqpair(0xd9fc00): expected_datao=0, payload_size=4096 00:17:27.321 [2024-07-15 
19:45:52.868475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868483] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868488] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.321 [2024-07-15 19:45:52.868504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.321 [2024-07-15 19:45:52.868507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.321 [2024-07-15 19:45:52.868522] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:27.321 [2024-07-15 19:45:52.868527] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:27.321 [2024-07-15 19:45:52.868532] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:27.321 [2024-07-15 19:45:52.868538] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:27.321 [2024-07-15 19:45:52.868543] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:27.321 [2024-07-15 19:45:52.868548] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:27.321 [2024-07-15 19:45:52.868557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:27.321 [2024-07-15 19:45:52.868565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.868581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.321 [2024-07-15 19:45:52.868617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.321 [2024-07-15 19:45:52.868679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.321 [2024-07-15 19:45:52.868686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.321 [2024-07-15 19:45:52.868690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.321 [2024-07-15 19:45:52.868707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.868723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:27.321 [2024-07-15 19:45:52.868730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.868743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.321 [2024-07-15 19:45:52.868749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.868762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.321 [2024-07-15 19:45:52.868769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.868782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.321 [2024-07-15 19:45:52.868787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:27.321 [2024-07-15 19:45:52.868796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:27.321 [2024-07-15 19:45:52.868803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.868814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.321 [2024-07-15 19:45:52.868835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde29c0, cid 0, qid 0 00:17:27.321 [2024-07-15 19:45:52.868842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2b40, cid 1, qid 0 00:17:27.321 [2024-07-15 19:45:52.868847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2cc0, cid 2, qid 0 00:17:27.321 [2024-07-15 19:45:52.868852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.321 [2024-07-15 19:45:52.868857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2fc0, cid 4, qid 0 00:17:27.321 [2024-07-15 19:45:52.868948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.321 [2024-07-15 19:45:52.868955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.321 [2024-07-15 19:45:52.868958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2fc0) on tqpair=0xd9fc00 00:17:27.321 [2024-07-15 
19:45:52.868972] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:27.321 [2024-07-15 19:45:52.868978] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:27.321 [2024-07-15 19:45:52.868989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.868994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.869002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.321 [2024-07-15 19:45:52.869022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2fc0, cid 4, qid 0 00:17:27.321 [2024-07-15 19:45:52.869105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.321 [2024-07-15 19:45:52.869112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.321 [2024-07-15 19:45:52.869116] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869120] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9fc00): datao=0, datal=4096, cccid=4 00:17:27.321 [2024-07-15 19:45:52.869125] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde2fc0) on tqpair(0xd9fc00): expected_datao=0, payload_size=4096 00:17:27.321 [2024-07-15 19:45:52.869130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869137] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869141] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.321 [2024-07-15 19:45:52.869156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.321 [2024-07-15 19:45:52.869159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2fc0) on tqpair=0xd9fc00 00:17:27.321 [2024-07-15 19:45:52.869178] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:27.321 [2024-07-15 19:45:52.869230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.869246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.321 [2024-07-15 19:45:52.869254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.869269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.321 [2024-07-15 19:45:52.869297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xde2fc0, cid 4, qid 0 00:17:27.321 [2024-07-15 19:45:52.869304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde3140, cid 5, qid 0 00:17:27.321 [2024-07-15 19:45:52.869410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.321 [2024-07-15 19:45:52.869417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.321 [2024-07-15 19:45:52.869421] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869425] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9fc00): datao=0, datal=1024, cccid=4 00:17:27.321 [2024-07-15 19:45:52.869444] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde2fc0) on tqpair(0xd9fc00): expected_datao=0, payload_size=1024 00:17:27.321 [2024-07-15 19:45:52.869449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869456] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869460] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.321 [2024-07-15 19:45:52.869471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.321 [2024-07-15 19:45:52.869477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.869481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde3140) on tqpair=0xd9fc00 00:17:27.321 [2024-07-15 19:45:52.914215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.321 [2024-07-15 19:45:52.914251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.321 [2024-07-15 19:45:52.914274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.914281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2fc0) on tqpair=0xd9fc00 00:17:27.321 [2024-07-15 19:45:52.914313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.321 [2024-07-15 19:45:52.914319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9fc00) 00:17:27.321 [2024-07-15 19:45:52.914335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.322 [2024-07-15 19:45:52.914378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2fc0, cid 4, qid 0 00:17:27.322 [2024-07-15 19:45:52.914501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.322 [2024-07-15 19:45:52.914509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.322 [2024-07-15 19:45:52.914512] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914517] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9fc00): datao=0, datal=3072, cccid=4 00:17:27.322 [2024-07-15 19:45:52.914522] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde2fc0) on tqpair(0xd9fc00): expected_datao=0, payload_size=3072 00:17:27.322 [2024-07-15 19:45:52.914527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914537] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914542] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.322 [2024-07-15 19:45:52.914557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.322 [2024-07-15 19:45:52.914561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2fc0) on tqpair=0xd9fc00 00:17:27.322 [2024-07-15 19:45:52.914578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd9fc00) 00:17:27.322 [2024-07-15 19:45:52.914591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.322 [2024-07-15 19:45:52.914619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2fc0, cid 4, qid 0 00:17:27.322 [2024-07-15 19:45:52.914701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.322 [2024-07-15 19:45:52.914708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.322 [2024-07-15 19:45:52.914712] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914716] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd9fc00): datao=0, datal=8, cccid=4 00:17:27.322 [2024-07-15 19:45:52.914721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xde2fc0) on tqpair(0xd9fc00): expected_datao=0, payload_size=8 00:17:27.322 [2024-07-15 19:45:52.914725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914732] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.914736] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.955335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.322 [2024-07-15 19:45:52.955372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.322 [2024-07-15 19:45:52.955393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.322 [2024-07-15 19:45:52.955399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2fc0) on tqpair=0xd9fc00 00:17:27.322 ===================================================== 00:17:27.322 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:27.322 ===================================================== 00:17:27.322 Controller Capabilities/Features 00:17:27.322 ================================ 00:17:27.322 Vendor ID: 0000 00:17:27.322 Subsystem Vendor ID: 0000 00:17:27.322 Serial Number: .................... 00:17:27.322 Model Number: ........................................ 
00:17:27.322 Firmware Version: 24.09 00:17:27.322 Recommended Arb Burst: 0 00:17:27.322 IEEE OUI Identifier: 00 00 00 00:17:27.322 Multi-path I/O 00:17:27.322 May have multiple subsystem ports: No 00:17:27.322 May have multiple controllers: No 00:17:27.322 Associated with SR-IOV VF: No 00:17:27.322 Max Data Transfer Size: 131072 00:17:27.322 Max Number of Namespaces: 0 00:17:27.322 Max Number of I/O Queues: 1024 00:17:27.322 NVMe Specification Version (VS): 1.3 00:17:27.322 NVMe Specification Version (Identify): 1.3 00:17:27.322 Maximum Queue Entries: 128 00:17:27.322 Contiguous Queues Required: Yes 00:17:27.322 Arbitration Mechanisms Supported 00:17:27.322 Weighted Round Robin: Not Supported 00:17:27.322 Vendor Specific: Not Supported 00:17:27.322 Reset Timeout: 15000 ms 00:17:27.322 Doorbell Stride: 4 bytes 00:17:27.322 NVM Subsystem Reset: Not Supported 00:17:27.322 Command Sets Supported 00:17:27.322 NVM Command Set: Supported 00:17:27.322 Boot Partition: Not Supported 00:17:27.322 Memory Page Size Minimum: 4096 bytes 00:17:27.322 Memory Page Size Maximum: 4096 bytes 00:17:27.322 Persistent Memory Region: Not Supported 00:17:27.322 Optional Asynchronous Events Supported 00:17:27.322 Namespace Attribute Notices: Not Supported 00:17:27.322 Firmware Activation Notices: Not Supported 00:17:27.322 ANA Change Notices: Not Supported 00:17:27.322 PLE Aggregate Log Change Notices: Not Supported 00:17:27.322 LBA Status Info Alert Notices: Not Supported 00:17:27.322 EGE Aggregate Log Change Notices: Not Supported 00:17:27.322 Normal NVM Subsystem Shutdown event: Not Supported 00:17:27.322 Zone Descriptor Change Notices: Not Supported 00:17:27.322 Discovery Log Change Notices: Supported 00:17:27.322 Controller Attributes 00:17:27.322 128-bit Host Identifier: Not Supported 00:17:27.322 Non-Operational Permissive Mode: Not Supported 00:17:27.322 NVM Sets: Not Supported 00:17:27.322 Read Recovery Levels: Not Supported 00:17:27.322 Endurance Groups: Not Supported 00:17:27.322 Predictable Latency Mode: Not Supported 00:17:27.322 Traffic Based Keep ALive: Not Supported 00:17:27.322 Namespace Granularity: Not Supported 00:17:27.322 SQ Associations: Not Supported 00:17:27.322 UUID List: Not Supported 00:17:27.322 Multi-Domain Subsystem: Not Supported 00:17:27.322 Fixed Capacity Management: Not Supported 00:17:27.322 Variable Capacity Management: Not Supported 00:17:27.322 Delete Endurance Group: Not Supported 00:17:27.322 Delete NVM Set: Not Supported 00:17:27.322 Extended LBA Formats Supported: Not Supported 00:17:27.322 Flexible Data Placement Supported: Not Supported 00:17:27.322 00:17:27.322 Controller Memory Buffer Support 00:17:27.322 ================================ 00:17:27.322 Supported: No 00:17:27.322 00:17:27.322 Persistent Memory Region Support 00:17:27.322 ================================ 00:17:27.322 Supported: No 00:17:27.322 00:17:27.322 Admin Command Set Attributes 00:17:27.322 ============================ 00:17:27.322 Security Send/Receive: Not Supported 00:17:27.322 Format NVM: Not Supported 00:17:27.322 Firmware Activate/Download: Not Supported 00:17:27.322 Namespace Management: Not Supported 00:17:27.322 Device Self-Test: Not Supported 00:17:27.322 Directives: Not Supported 00:17:27.322 NVMe-MI: Not Supported 00:17:27.322 Virtualization Management: Not Supported 00:17:27.322 Doorbell Buffer Config: Not Supported 00:17:27.322 Get LBA Status Capability: Not Supported 00:17:27.322 Command & Feature Lockdown Capability: Not Supported 00:17:27.322 Abort Command Limit: 1 00:17:27.322 Async 
Event Request Limit: 4 00:17:27.322 Number of Firmware Slots: N/A 00:17:27.322 Firmware Slot 1 Read-Only: N/A 00:17:27.322 Firmware Activation Without Reset: N/A 00:17:27.322 Multiple Update Detection Support: N/A 00:17:27.322 Firmware Update Granularity: No Information Provided 00:17:27.322 Per-Namespace SMART Log: No 00:17:27.322 Asymmetric Namespace Access Log Page: Not Supported 00:17:27.322 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:27.322 Command Effects Log Page: Not Supported 00:17:27.322 Get Log Page Extended Data: Supported 00:17:27.322 Telemetry Log Pages: Not Supported 00:17:27.322 Persistent Event Log Pages: Not Supported 00:17:27.322 Supported Log Pages Log Page: May Support 00:17:27.322 Commands Supported & Effects Log Page: Not Supported 00:17:27.322 Feature Identifiers & Effects Log Page:May Support 00:17:27.322 NVMe-MI Commands & Effects Log Page: May Support 00:17:27.322 Data Area 4 for Telemetry Log: Not Supported 00:17:27.322 Error Log Page Entries Supported: 128 00:17:27.322 Keep Alive: Not Supported 00:17:27.322 00:17:27.322 NVM Command Set Attributes 00:17:27.322 ========================== 00:17:27.322 Submission Queue Entry Size 00:17:27.322 Max: 1 00:17:27.322 Min: 1 00:17:27.322 Completion Queue Entry Size 00:17:27.322 Max: 1 00:17:27.322 Min: 1 00:17:27.322 Number of Namespaces: 0 00:17:27.322 Compare Command: Not Supported 00:17:27.322 Write Uncorrectable Command: Not Supported 00:17:27.322 Dataset Management Command: Not Supported 00:17:27.322 Write Zeroes Command: Not Supported 00:17:27.322 Set Features Save Field: Not Supported 00:17:27.322 Reservations: Not Supported 00:17:27.322 Timestamp: Not Supported 00:17:27.322 Copy: Not Supported 00:17:27.322 Volatile Write Cache: Not Present 00:17:27.322 Atomic Write Unit (Normal): 1 00:17:27.322 Atomic Write Unit (PFail): 1 00:17:27.322 Atomic Compare & Write Unit: 1 00:17:27.322 Fused Compare & Write: Supported 00:17:27.322 Scatter-Gather List 00:17:27.322 SGL Command Set: Supported 00:17:27.322 SGL Keyed: Supported 00:17:27.322 SGL Bit Bucket Descriptor: Not Supported 00:17:27.322 SGL Metadata Pointer: Not Supported 00:17:27.322 Oversized SGL: Not Supported 00:17:27.322 SGL Metadata Address: Not Supported 00:17:27.322 SGL Offset: Supported 00:17:27.322 Transport SGL Data Block: Not Supported 00:17:27.322 Replay Protected Memory Block: Not Supported 00:17:27.322 00:17:27.322 Firmware Slot Information 00:17:27.322 ========================= 00:17:27.322 Active slot: 0 00:17:27.322 00:17:27.322 00:17:27.322 Error Log 00:17:27.322 ========= 00:17:27.322 00:17:27.322 Active Namespaces 00:17:27.322 ================= 00:17:27.322 Discovery Log Page 00:17:27.322 ================== 00:17:27.322 Generation Counter: 2 00:17:27.322 Number of Records: 2 00:17:27.322 Record Format: 0 00:17:27.322 00:17:27.322 Discovery Log Entry 0 00:17:27.322 ---------------------- 00:17:27.322 Transport Type: 3 (TCP) 00:17:27.322 Address Family: 1 (IPv4) 00:17:27.323 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:27.323 Entry Flags: 00:17:27.323 Duplicate Returned Information: 1 00:17:27.323 Explicit Persistent Connection Support for Discovery: 1 00:17:27.323 Transport Requirements: 00:17:27.323 Secure Channel: Not Required 00:17:27.323 Port ID: 0 (0x0000) 00:17:27.323 Controller ID: 65535 (0xffff) 00:17:27.323 Admin Max SQ Size: 128 00:17:27.323 Transport Service Identifier: 4420 00:17:27.323 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:27.323 Transport Address: 10.0.0.2 00:17:27.323 
Discovery Log Entry 1 00:17:27.323 ---------------------- 00:17:27.323 Transport Type: 3 (TCP) 00:17:27.323 Address Family: 1 (IPv4) 00:17:27.323 Subsystem Type: 2 (NVM Subsystem) 00:17:27.323 Entry Flags: 00:17:27.323 Duplicate Returned Information: 0 00:17:27.323 Explicit Persistent Connection Support for Discovery: 0 00:17:27.323 Transport Requirements: 00:17:27.323 Secure Channel: Not Required 00:17:27.323 Port ID: 0 (0x0000) 00:17:27.323 Controller ID: 65535 (0xffff) 00:17:27.323 Admin Max SQ Size: 128 00:17:27.323 Transport Service Identifier: 4420 00:17:27.323 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:27.323 Transport Address: 10.0.0.2 [2024-07-15 19:45:52.955567] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:27.323 [2024-07-15 19:45:52.955584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde29c0) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.955593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.323 [2024-07-15 19:45:52.955600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2b40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.955604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.323 [2024-07-15 19:45:52.955610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2cc0) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.955614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.323 [2024-07-15 19:45:52.955619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.955624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.323 [2024-07-15 19:45:52.955638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.955643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.955647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.955657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.955685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.955770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.955777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.955781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.955785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.955794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.955799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.955803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.955810] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.955834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.955934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.955940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.955944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.955948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.955962] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:27.323 [2024-07-15 19:45:52.955967] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:27.323 [2024-07-15 19:45:52.955978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.955982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.955986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.955993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.956013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.956069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.956076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.956079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.956095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.956110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.956128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.956197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.956205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.956209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.956260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956271] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.956279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.956302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.956358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.956365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.956369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.956384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.956401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.956420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.956477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.956484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.956488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.956502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.956519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.956538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.956604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.956611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.956630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.956644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.956660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.956677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.956731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.956737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.956741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.956755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.323 [2024-07-15 19:45:52.956771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.323 [2024-07-15 19:45:52.956788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.323 [2024-07-15 19:45:52.956839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.323 [2024-07-15 19:45:52.956845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.323 [2024-07-15 19:45:52.956849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.323 [2024-07-15 19:45:52.956863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.323 [2024-07-15 19:45:52.956867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.956871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.956878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.956895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.956947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.956954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.956957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.956961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.956971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.956976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.956979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.956986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.957004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.957072] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.957079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.957083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.957097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.957124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.957143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.957195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.957202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.957206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.957232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.957249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.957270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.957327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.957334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.957338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.957352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.957368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.957387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.957460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.957467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.957470] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.957484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.957500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.957518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.957570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.957577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.957580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.957594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.957609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.957627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.957681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.957688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.957691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.957705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.957721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.957738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.957794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.957801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.957804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 
[2024-07-15 19:45:52.957844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.957862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.957882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.957941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.957948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.957952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.957967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.957976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.324 [2024-07-15 19:45:52.957983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.324 [2024-07-15 19:45:52.958003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.324 [2024-07-15 19:45:52.958056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.324 [2024-07-15 19:45:52.958062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.324 [2024-07-15 19:45:52.958066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.958070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.324 [2024-07-15 19:45:52.958081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.958085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.324 [2024-07-15 19:45:52.958089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.958097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.958115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.958184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.958202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.958206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.958222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 
19:45:52.958231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.958238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.958258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.958311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.958318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.958321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.958336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958340] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.958352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.958370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.958440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.958446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.958450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.958464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.958479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.958497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.958550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.958557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.958561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.958575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.958590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.958608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.958663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.958670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.958673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.958687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.958703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.958721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.958777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.958784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.958787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.958801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.958817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.958835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.958886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.958893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.958896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.958910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.958919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.958926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.958944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 
19:45:52.959014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.959021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.959024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.959028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.959039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.959043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.959047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.959054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.959072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.959126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.959133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.959137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.959141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.959151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.959155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.959159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.959166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.963242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.963265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.963273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 19:45:52.963277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.963281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.963296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.963301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.963305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd9fc00) 00:17:27.325 [2024-07-15 19:45:52.963315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.325 [2024-07-15 19:45:52.963340] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xde2e40, cid 3, qid 0 00:17:27.325 [2024-07-15 19:45:52.963402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.325 [2024-07-15 19:45:52.963409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.325 [2024-07-15 
19:45:52.963412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.325 [2024-07-15 19:45:52.963416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xde2e40) on tqpair=0xd9fc00 00:17:27.325 [2024-07-15 19:45:52.963425] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:17:27.325 00:17:27.325 19:45:52 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:27.325 [2024-07-15 19:45:53.009222] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:27.325 [2024-07-15 19:45:53.009281] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87046 ] 00:17:27.587 [2024-07-15 19:45:53.147972] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:27.587 [2024-07-15 19:45:53.148051] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:27.587 [2024-07-15 19:45:53.148058] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:27.587 [2024-07-15 19:45:53.148072] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:27.587 [2024-07-15 19:45:53.148079] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:27.587 [2024-07-15 19:45:53.152257] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:27.587 [2024-07-15 19:45:53.152348] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7d1c00 0 00:17:27.587 [2024-07-15 19:45:53.152429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:27.587 [2024-07-15 19:45:53.152439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:27.588 [2024-07-15 19:45:53.152443] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:27.588 [2024-07-15 19:45:53.152447] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:27.588 [2024-07-15 19:45:53.152509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.152516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.152520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.588 [2024-07-15 19:45:53.152535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:27.588 [2024-07-15 19:45:53.152575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.588 [2024-07-15 19:45:53.160207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.588 [2024-07-15 19:45:53.160232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.588 [2024-07-15 19:45:53.160254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.160260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.588 [2024-07-15 19:45:53.160272] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:27.588 [2024-07-15 19:45:53.160281] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:27.588 [2024-07-15 19:45:53.160288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:27.588 [2024-07-15 19:45:53.160309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.160315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.160319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.588 [2024-07-15 19:45:53.160331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.588 [2024-07-15 19:45:53.160361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.588 [2024-07-15 19:45:53.160426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.588 [2024-07-15 19:45:53.160433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.588 [2024-07-15 19:45:53.160437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.160441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.588 [2024-07-15 19:45:53.160447] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:27.588 [2024-07-15 19:45:53.160469] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:27.588 [2024-07-15 19:45:53.160477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.160481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.160484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.588 [2024-07-15 19:45:53.160492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.588 [2024-07-15 19:45:53.160527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.588 [2024-07-15 19:45:53.160583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.588 [2024-07-15 19:45:53.160590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.588 [2024-07-15 19:45:53.160593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.588 [2024-07-15 19:45:53.160598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.589 [2024-07-15 19:45:53.160604] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:27.589 [2024-07-15 19:45:53.160612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:27.589 [2024-07-15 19:45:53.160620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.160624] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.160628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.589 [2024-07-15 19:45:53.160635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.589 [2024-07-15 19:45:53.160655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.589 [2024-07-15 19:45:53.160706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.589 [2024-07-15 19:45:53.160712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.589 [2024-07-15 19:45:53.160716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.160720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.589 [2024-07-15 19:45:53.160726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:27.589 [2024-07-15 19:45:53.160736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.160741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.160744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.589 [2024-07-15 19:45:53.160751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.589 [2024-07-15 19:45:53.160770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.589 [2024-07-15 19:45:53.160825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.589 [2024-07-15 19:45:53.160831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.589 [2024-07-15 19:45:53.160835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.160839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.589 [2024-07-15 19:45:53.160844] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:27.589 [2024-07-15 19:45:53.160849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:27.589 [2024-07-15 19:45:53.160858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:27.589 [2024-07-15 19:45:53.160964] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:27.589 [2024-07-15 19:45:53.160968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:27.589 [2024-07-15 19:45:53.160979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.160983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.160987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.589 [2024-07-15 19:45:53.160994] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.589 [2024-07-15 19:45:53.161015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.589 [2024-07-15 19:45:53.161069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.589 [2024-07-15 19:45:53.161076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.589 [2024-07-15 19:45:53.161079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.161083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.589 [2024-07-15 19:45:53.161089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:27.589 [2024-07-15 19:45:53.161099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.161103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.589 [2024-07-15 19:45:53.161107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.589 [2024-07-15 19:45:53.161114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.589 [2024-07-15 19:45:53.161133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.589 [2024-07-15 19:45:53.161217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.590 [2024-07-15 19:45:53.161224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.590 [2024-07-15 19:45:53.161228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.590 [2024-07-15 19:45:53.161250] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:27.590 [2024-07-15 19:45:53.161256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:27.590 [2024-07-15 19:45:53.161265] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:27.590 [2024-07-15 19:45:53.161276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:27.590 [2024-07-15 19:45:53.161288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.590 [2024-07-15 19:45:53.161301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.590 [2024-07-15 19:45:53.161325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.590 [2024-07-15 19:45:53.161427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.590 [2024-07-15 19:45:53.161434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.590 [2024-07-15 
19:45:53.161438] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161442] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d1c00): datao=0, datal=4096, cccid=0 00:17:27.590 [2024-07-15 19:45:53.161448] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8149c0) on tqpair(0x7d1c00): expected_datao=0, payload_size=4096 00:17:27.590 [2024-07-15 19:45:53.161453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161463] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161468] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.590 [2024-07-15 19:45:53.161483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.590 [2024-07-15 19:45:53.161487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.590 [2024-07-15 19:45:53.161501] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:27.590 [2024-07-15 19:45:53.161507] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:27.590 [2024-07-15 19:45:53.161512] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:27.590 [2024-07-15 19:45:53.161516] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:27.590 [2024-07-15 19:45:53.161521] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:27.590 [2024-07-15 19:45:53.161527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:27.590 [2024-07-15 19:45:53.161537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:27.590 [2024-07-15 19:45:53.161546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.590 [2024-07-15 19:45:53.161570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.590 [2024-07-15 19:45:53.161578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.590 [2024-07-15 19:45:53.161613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.591 [2024-07-15 19:45:53.161670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.591 [2024-07-15 19:45:53.161678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.591 [2024-07-15 19:45:53.161682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.591 [2024-07-15 19:45:53.161699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.591 [2024-07-15 
19:45:53.161704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7d1c00) 00:17:27.591 [2024-07-15 19:45:53.161715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.591 [2024-07-15 19:45:53.161722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7d1c00) 00:17:27.591 [2024-07-15 19:45:53.161735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.591 [2024-07-15 19:45:53.161742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7d1c00) 00:17:27.591 [2024-07-15 19:45:53.161755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.591 [2024-07-15 19:45:53.161761] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.591 [2024-07-15 19:45:53.161775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.591 [2024-07-15 19:45:53.161780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:27.591 [2024-07-15 19:45:53.161789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:27.591 [2024-07-15 19:45:53.161797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d1c00) 00:17:27.591 [2024-07-15 19:45:53.161807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.591 [2024-07-15 19:45:53.161858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 0, qid 0 00:17:27.591 [2024-07-15 19:45:53.161866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814b40, cid 1, qid 0 00:17:27.591 [2024-07-15 19:45:53.161871] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814cc0, cid 2, qid 0 00:17:27.591 [2024-07-15 19:45:53.161876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.591 [2024-07-15 19:45:53.161881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814fc0, cid 4, qid 0 00:17:27.591 [2024-07-15 19:45:53.161979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.591 
[2024-07-15 19:45:53.161986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.591 [2024-07-15 19:45:53.161990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.161994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814fc0) on tqpair=0x7d1c00 00:17:27.591 [2024-07-15 19:45:53.162005] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:27.591 [2024-07-15 19:45:53.162011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:27.591 [2024-07-15 19:45:53.162021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:27.591 [2024-07-15 19:45:53.162028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:27.591 [2024-07-15 19:45:53.162035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.591 [2024-07-15 19:45:53.162040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d1c00) 00:17:27.592 [2024-07-15 19:45:53.162052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.592 [2024-07-15 19:45:53.162073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814fc0, cid 4, qid 0 00:17:27.592 [2024-07-15 19:45:53.162138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.592 [2024-07-15 19:45:53.162145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.592 [2024-07-15 19:45:53.162149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814fc0) on tqpair=0x7d1c00 00:17:27.592 [2024-07-15 19:45:53.162264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:27.592 [2024-07-15 19:45:53.162279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:27.592 [2024-07-15 19:45:53.162288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d1c00) 00:17:27.592 [2024-07-15 19:45:53.162300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.592 [2024-07-15 19:45:53.162324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814fc0, cid 4, qid 0 00:17:27.592 [2024-07-15 19:45:53.162394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.592 [2024-07-15 19:45:53.162402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.592 [2024-07-15 19:45:53.162405] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162409] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x7d1c00): datao=0, datal=4096, cccid=4 00:17:27.592 [2024-07-15 19:45:53.162414] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x814fc0) on tqpair(0x7d1c00): expected_datao=0, payload_size=4096 00:17:27.592 [2024-07-15 19:45:53.162419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162427] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162431] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.592 [2024-07-15 19:45:53.162446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.592 [2024-07-15 19:45:53.162449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814fc0) on tqpair=0x7d1c00 00:17:27.592 [2024-07-15 19:45:53.162466] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:27.592 [2024-07-15 19:45:53.162479] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:27.592 [2024-07-15 19:45:53.162491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:27.592 [2024-07-15 19:45:53.162499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d1c00) 00:17:27.592 [2024-07-15 19:45:53.162511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.592 [2024-07-15 19:45:53.162548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814fc0, cid 4, qid 0 00:17:27.592 [2024-07-15 19:45:53.162646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.592 [2024-07-15 19:45:53.162653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.592 [2024-07-15 19:45:53.162656] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162660] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d1c00): datao=0, datal=4096, cccid=4 00:17:27.592 [2024-07-15 19:45:53.162664] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x814fc0) on tqpair(0x7d1c00): expected_datao=0, payload_size=4096 00:17:27.592 [2024-07-15 19:45:53.162669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162676] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.592 [2024-07-15 19:45:53.162680] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.162687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.593 [2024-07-15 19:45:53.162693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.593 [2024-07-15 19:45:53.162697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.162701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814fc0) on tqpair=0x7d1c00 00:17:27.593 [2024-07-15 19:45:53.162717] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.162741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d1c00) 00:17:27.593 [2024-07-15 19:45:53.162748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.593 [2024-07-15 19:45:53.162770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814fc0, cid 4, qid 0 00:17:27.593 [2024-07-15 19:45:53.162834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.593 [2024-07-15 19:45:53.162841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.593 [2024-07-15 19:45:53.162845] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.162848] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d1c00): datao=0, datal=4096, cccid=4 00:17:27.593 [2024-07-15 19:45:53.162853] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x814fc0) on tqpair(0x7d1c00): expected_datao=0, payload_size=4096 00:17:27.593 [2024-07-15 19:45:53.162857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.162864] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.162868] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.162876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.593 [2024-07-15 19:45:53.162882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.593 [2024-07-15 19:45:53.162885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.162889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814fc0) on tqpair=0x7d1c00 00:17:27.593 [2024-07-15 19:45:53.162914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162959] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:27.593 [2024-07-15 19:45:53.162964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:27.593 [2024-07-15 19:45:53.162969] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:27.593 [2024-07-15 19:45:53.163000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.163006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d1c00) 00:17:27.593 [2024-07-15 19:45:53.163014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.593 [2024-07-15 19:45:53.163022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.163026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.593 [2024-07-15 19:45:53.163029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d1c00) 00:17:27.593 [2024-07-15 19:45:53.163036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.593 [2024-07-15 19:45:53.163063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814fc0, cid 4, qid 0 00:17:27.594 [2024-07-15 19:45:53.163071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x815140, cid 5, qid 0 00:17:27.594 [2024-07-15 19:45:53.163143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.594 [2024-07-15 19:45:53.163194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.594 [2024-07-15 19:45:53.163200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814fc0) on tqpair=0x7d1c00 00:17:27.594 [2024-07-15 19:45:53.163212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.594 [2024-07-15 19:45:53.163218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.594 [2024-07-15 19:45:53.163222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x815140) on tqpair=0x7d1c00 00:17:27.594 [2024-07-15 19:45:53.163253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d1c00) 00:17:27.594 [2024-07-15 19:45:53.163265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.594 [2024-07-15 19:45:53.163287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x815140, cid 5, qid 0 00:17:27.594 [2024-07-15 19:45:53.163349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.594 [2024-07-15 19:45:53.163356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.594 [2024-07-15 19:45:53.163359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x815140) on 
tqpair=0x7d1c00 00:17:27.594 [2024-07-15 19:45:53.163374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d1c00) 00:17:27.594 [2024-07-15 19:45:53.163385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.594 [2024-07-15 19:45:53.163420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x815140, cid 5, qid 0 00:17:27.594 [2024-07-15 19:45:53.163477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.594 [2024-07-15 19:45:53.163484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.594 [2024-07-15 19:45:53.163488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x815140) on tqpair=0x7d1c00 00:17:27.594 [2024-07-15 19:45:53.163503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d1c00) 00:17:27.594 [2024-07-15 19:45:53.163530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.594 [2024-07-15 19:45:53.163549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x815140, cid 5, qid 0 00:17:27.594 [2024-07-15 19:45:53.163602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.594 [2024-07-15 19:45:53.163609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.594 [2024-07-15 19:45:53.163613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x815140) on tqpair=0x7d1c00 00:17:27.594 [2024-07-15 19:45:53.163637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7d1c00) 00:17:27.594 [2024-07-15 19:45:53.163650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.594 [2024-07-15 19:45:53.163658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7d1c00) 00:17:27.594 [2024-07-15 19:45:53.163669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.594 [2024-07-15 19:45:53.163677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.594 [2024-07-15 19:45:53.163681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7d1c00) 00:17:27.595 [2024-07-15 19:45:53.163687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.595 [2024-07-15 19:45:53.163696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:17:27.595 [2024-07-15 19:45:53.163700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7d1c00) 00:17:27.595 [2024-07-15 19:45:53.163706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.595 [2024-07-15 19:45:53.163728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x815140, cid 5, qid 0 00:17:27.595 [2024-07-15 19:45:53.163735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814fc0, cid 4, qid 0 00:17:27.595 [2024-07-15 19:45:53.163740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8152c0, cid 6, qid 0 00:17:27.595 [2024-07-15 19:45:53.163744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x815440, cid 7, qid 0 00:17:27.595 [2024-07-15 19:45:53.163902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.595 [2024-07-15 19:45:53.163909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.595 [2024-07-15 19:45:53.163913] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.163917] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d1c00): datao=0, datal=8192, cccid=5 00:17:27.595 [2024-07-15 19:45:53.163921] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x815140) on tqpair(0x7d1c00): expected_datao=0, payload_size=8192 00:17:27.595 [2024-07-15 19:45:53.163942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.163959] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.163964] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.163970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.595 [2024-07-15 19:45:53.163976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.595 [2024-07-15 19:45:53.163979] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.163983] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d1c00): datao=0, datal=512, cccid=4 00:17:27.595 [2024-07-15 19:45:53.163988] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x814fc0) on tqpair(0x7d1c00): expected_datao=0, payload_size=512 00:17:27.595 [2024-07-15 19:45:53.163993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.163999] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164003] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.595 [2024-07-15 19:45:53.164014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.595 [2024-07-15 19:45:53.164018] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164022] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d1c00): datao=0, datal=512, cccid=6 00:17:27.595 [2024-07-15 19:45:53.164026] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8152c0) on tqpair(0x7d1c00): expected_datao=0, payload_size=512 00:17:27.595 [2024-07-15 19:45:53.164031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164037] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164040] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:27.595 [2024-07-15 19:45:53.164052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:27.595 [2024-07-15 19:45:53.164055] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164059] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7d1c00): datao=0, datal=4096, cccid=7 00:17:27.595 [2024-07-15 19:45:53.164063] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x815440) on tqpair(0x7d1c00): expected_datao=0, payload_size=4096 00:17:27.595 [2024-07-15 19:45:53.164068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164075] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164079] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:27.595 [2024-07-15 19:45:53.164087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.595 [2024-07-15 19:45:53.164093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.595 [2024-07-15 19:45:53.164096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.596 [2024-07-15 19:45:53.164100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x815140) on tqpair=0x7d1c00 00:17:27.596 [2024-07-15 19:45:53.164116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.596 [2024-07-15 19:45:53.164123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.596 [2024-07-15 19:45:53.164126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.596 [2024-07-15 19:45:53.164130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814fc0) on tqpair=0x7d1c00 00:17:27.596 [2024-07-15 19:45:53.164145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.596 [2024-07-15 19:45:53.164152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.596 [2024-07-15 19:45:53.164155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.596 [2024-07-15 19:45:53.164159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8152c0) on tqpair=0x7d1c00 00:17:27.596 [2024-07-15 19:45:53.164183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.596 ===================================================== 00:17:27.596 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:27.596 ===================================================== 00:17:27.596 Controller Capabilities/Features 00:17:27.596 ================================ 00:17:27.596 Vendor ID: 8086 00:17:27.596 Subsystem Vendor ID: 8086 00:17:27.596 Serial Number: SPDK00000000000001 00:17:27.596 Model Number: SPDK bdev Controller 00:17:27.596 Firmware Version: 24.09 00:17:27.596 Recommended Arb Burst: 6 00:17:27.596 IEEE OUI Identifier: e4 d2 5c 00:17:27.596 Multi-path I/O 00:17:27.596 May have multiple subsystem ports: Yes 00:17:27.596 May have multiple controllers: Yes 00:17:27.596 Associated with SR-IOV VF: No 00:17:27.596 Max Data Transfer Size: 131072 00:17:27.596 Max Number of Namespaces: 32 00:17:27.596 Max Number of I/O 
Queues: 127 00:17:27.596 NVMe Specification Version (VS): 1.3 00:17:27.596 NVMe Specification Version (Identify): 1.3 00:17:27.596 Maximum Queue Entries: 128 00:17:27.596 Contiguous Queues Required: Yes 00:17:27.596 Arbitration Mechanisms Supported 00:17:27.596 Weighted Round Robin: Not Supported 00:17:27.596 Vendor Specific: Not Supported 00:17:27.596 Reset Timeout: 15000 ms 00:17:27.596 Doorbell Stride: 4 bytes 00:17:27.596 NVM Subsystem Reset: Not Supported 00:17:27.596 Command Sets Supported 00:17:27.596 NVM Command Set: Supported 00:17:27.596 Boot Partition: Not Supported 00:17:27.596 Memory Page Size Minimum: 4096 bytes 00:17:27.596 Memory Page Size Maximum: 4096 bytes 00:17:27.596 Persistent Memory Region: Not Supported 00:17:27.596 Optional Asynchronous Events Supported 00:17:27.596 Namespace Attribute Notices: Supported 00:17:27.596 Firmware Activation Notices: Not Supported 00:17:27.596 ANA Change Notices: Not Supported 00:17:27.596 PLE Aggregate Log Change Notices: Not Supported 00:17:27.596 LBA Status Info Alert Notices: Not Supported 00:17:27.596 EGE Aggregate Log Change Notices: Not Supported 00:17:27.596 Normal NVM Subsystem Shutdown event: Not Supported 00:17:27.596 Zone Descriptor Change Notices: Not Supported 00:17:27.596 Discovery Log Change Notices: Not Supported 00:17:27.596 Controller Attributes 00:17:27.596 128-bit Host Identifier: Supported 00:17:27.596 Non-Operational Permissive Mode: Not Supported 00:17:27.597 NVM Sets: Not Supported 00:17:27.597 Read Recovery Levels: Not Supported 00:17:27.597 Endurance Groups: Not Supported 00:17:27.597 Predictable Latency Mode: Not Supported 00:17:27.597 Traffic Based Keep ALive: Not Supported 00:17:27.597 Namespace Granularity: Not Supported 00:17:27.597 SQ Associations: Not Supported 00:17:27.597 UUID List: Not Supported 00:17:27.597 Multi-Domain Subsystem: Not Supported 00:17:27.597 Fixed Capacity Management: Not Supported 00:17:27.597 Variable Capacity Management: Not Supported 00:17:27.597 Delete Endurance Group: Not Supported 00:17:27.597 Delete NVM Set: Not Supported 00:17:27.597 Extended LBA Formats Supported: Not Supported 00:17:27.597 Flexible Data Placement Supported: Not Supported 00:17:27.597 00:17:27.597 Controller Memory Buffer Support 00:17:27.597 ================================ 00:17:27.597 Supported: No 00:17:27.597 00:17:27.597 Persistent Memory Region Support 00:17:27.597 ================================ 00:17:27.597 Supported: No 00:17:27.597 00:17:27.597 Admin Command Set Attributes 00:17:27.597 ============================ 00:17:27.597 Security Send/Receive: Not Supported 00:17:27.597 Format NVM: Not Supported 00:17:27.597 Firmware Activate/Download: Not Supported 00:17:27.597 Namespace Management: Not Supported 00:17:27.597 Device Self-Test: Not Supported 00:17:27.597 Directives: Not Supported 00:17:27.597 NVMe-MI: Not Supported 00:17:27.597 Virtualization Management: Not Supported 00:17:27.597 Doorbell Buffer Config: Not Supported 00:17:27.597 Get LBA Status Capability: Not Supported 00:17:27.597 Command & Feature Lockdown Capability: Not Supported 00:17:27.597 Abort Command Limit: 4 00:17:27.597 Async Event Request Limit: 4 00:17:27.597 Number of Firmware Slots: N/A 00:17:27.597 Firmware Slot 1 Read-Only: N/A 00:17:27.597 Firmware Activation Without Reset: [2024-07-15 19:45:53.164189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.597 [2024-07-15 19:45:53.164193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.597 [2024-07-15 19:45:53.164197] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x815440) on tqpair=0x7d1c00 00:17:27.597 N/A 00:17:27.597 Multiple Update Detection Support: N/A 00:17:27.597 Firmware Update Granularity: No Information Provided 00:17:27.597 Per-Namespace SMART Log: No 00:17:27.597 Asymmetric Namespace Access Log Page: Not Supported 00:17:27.597 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:27.597 Command Effects Log Page: Supported 00:17:27.597 Get Log Page Extended Data: Supported 00:17:27.597 Telemetry Log Pages: Not Supported 00:17:27.598 Persistent Event Log Pages: Not Supported 00:17:27.598 Supported Log Pages Log Page: May Support 00:17:27.598 Commands Supported & Effects Log Page: Not Supported 00:17:27.598 Feature Identifiers & Effects Log Page:May Support 00:17:27.598 NVMe-MI Commands & Effects Log Page: May Support 00:17:27.598 Data Area 4 for Telemetry Log: Not Supported 00:17:27.598 Error Log Page Entries Supported: 128 00:17:27.598 Keep Alive: Supported 00:17:27.598 Keep Alive Granularity: 10000 ms 00:17:27.598 00:17:27.598 NVM Command Set Attributes 00:17:27.598 ========================== 00:17:27.598 Submission Queue Entry Size 00:17:27.598 Max: 64 00:17:27.598 Min: 64 00:17:27.598 Completion Queue Entry Size 00:17:27.598 Max: 16 00:17:27.598 Min: 16 00:17:27.598 Number of Namespaces: 32 00:17:27.598 Compare Command: Supported 00:17:27.598 Write Uncorrectable Command: Not Supported 00:17:27.598 Dataset Management Command: Supported 00:17:27.598 Write Zeroes Command: Supported 00:17:27.598 Set Features Save Field: Not Supported 00:17:27.598 Reservations: Supported 00:17:27.598 Timestamp: Not Supported 00:17:27.598 Copy: Supported 00:17:27.598 Volatile Write Cache: Present 00:17:27.598 Atomic Write Unit (Normal): 1 00:17:27.598 Atomic Write Unit (PFail): 1 00:17:27.598 Atomic Compare & Write Unit: 1 00:17:27.598 Fused Compare & Write: Supported 00:17:27.598 Scatter-Gather List 00:17:27.598 SGL Command Set: Supported 00:17:27.598 SGL Keyed: Supported 00:17:27.598 SGL Bit Bucket Descriptor: Not Supported 00:17:27.598 SGL Metadata Pointer: Not Supported 00:17:27.598 Oversized SGL: Not Supported 00:17:27.598 SGL Metadata Address: Not Supported 00:17:27.598 SGL Offset: Supported 00:17:27.598 Transport SGL Data Block: Not Supported 00:17:27.598 Replay Protected Memory Block: Not Supported 00:17:27.598 00:17:27.598 Firmware Slot Information 00:17:27.598 ========================= 00:17:27.598 Active slot: 1 00:17:27.598 Slot 1 Firmware Revision: 24.09 00:17:27.598 00:17:27.598 00:17:27.598 Commands Supported and Effects 00:17:27.598 ============================== 00:17:27.598 Admin Commands 00:17:27.598 -------------- 00:17:27.598 Get Log Page (02h): Supported 00:17:27.598 Identify (06h): Supported 00:17:27.598 Abort (08h): Supported 00:17:27.598 Set Features (09h): Supported 00:17:27.598 Get Features (0Ah): Supported 00:17:27.599 Asynchronous Event Request (0Ch): Supported 00:17:27.599 Keep Alive (18h): Supported 00:17:27.599 I/O Commands 00:17:27.599 ------------ 00:17:27.599 Flush (00h): Supported LBA-Change 00:17:27.599 Write (01h): Supported LBA-Change 00:17:27.599 Read (02h): Supported 00:17:27.599 Compare (05h): Supported 00:17:27.599 Write Zeroes (08h): Supported LBA-Change 00:17:27.599 Dataset Management (09h): Supported LBA-Change 00:17:27.599 Copy (19h): Supported LBA-Change 00:17:27.599 00:17:27.599 Error Log 00:17:27.599 ========= 00:17:27.599 00:17:27.599 Arbitration 00:17:27.599 =========== 00:17:27.599 Arbitration Burst: 1 00:17:27.599 00:17:27.599 Power Management 
00:17:27.599 ================ 00:17:27.599 Number of Power States: 1 00:17:27.599 Current Power State: Power State #0 00:17:27.599 Power State #0: 00:17:27.599 Max Power: 0.00 W 00:17:27.599 Non-Operational State: Operational 00:17:27.599 Entry Latency: Not Reported 00:17:27.599 Exit Latency: Not Reported 00:17:27.599 Relative Read Throughput: 0 00:17:27.599 Relative Read Latency: 0 00:17:27.599 Relative Write Throughput: 0 00:17:27.599 Relative Write Latency: 0 00:17:27.599 Idle Power: Not Reported 00:17:27.599 Active Power: Not Reported 00:17:27.599 Non-Operational Permissive Mode: Not Supported 00:17:27.599 00:17:27.599 Health Information 00:17:27.599 ================== 00:17:27.599 Critical Warnings: 00:17:27.599 Available Spare Space: OK 00:17:27.599 Temperature: OK 00:17:27.599 Device Reliability: OK 00:17:27.599 Read Only: No 00:17:27.599 Volatile Memory Backup: OK 00:17:27.599 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:27.599 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:27.599 Available Spare: 0% 00:17:27.599 Available Spare Threshold: 0% 00:17:27.599 Life Percentage Used:[2024-07-15 19:45:53.168309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.599 [2024-07-15 19:45:53.168321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7d1c00) 00:17:27.599 [2024-07-15 19:45:53.168331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.599 [2024-07-15 19:45:53.168361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x815440, cid 7, qid 0 00:17:27.599 [2024-07-15 19:45:53.168435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.599 [2024-07-15 19:45:53.168442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.599 [2024-07-15 19:45:53.168446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.599 [2024-07-15 19:45:53.168450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x815440) on tqpair=0x7d1c00 00:17:27.599 [2024-07-15 19:45:53.168501] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:27.600 [2024-07-15 19:45:53.168512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7d1c00 00:17:27.600 [2024-07-15 19:45:53.168519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.600 [2024-07-15 19:45:53.168525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814b40) on tqpair=0x7d1c00 00:17:27.600 [2024-07-15 19:45:53.168529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.600 [2024-07-15 19:45:53.168534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814cc0) on tqpair=0x7d1c00 00:17:27.600 [2024-07-15 19:45:53.168539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.600 [2024-07-15 19:45:53.168544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.600 [2024-07-15 19:45:53.168548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.600 [2024-07-15 19:45:53.168558] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.600 [2024-07-15 19:45:53.168563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.600 [2024-07-15 19:45:53.168566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.600 [2024-07-15 19:45:53.168574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.600 [2024-07-15 19:45:53.168597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.600 [2024-07-15 19:45:53.168669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.600 [2024-07-15 19:45:53.168676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.600 [2024-07-15 19:45:53.168680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.600 [2024-07-15 19:45:53.168684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.600 [2024-07-15 19:45:53.168693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.600 [2024-07-15 19:45:53.168697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.600 [2024-07-15 19:45:53.168701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.601 [2024-07-15 19:45:53.168708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.601 [2024-07-15 19:45:53.168732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.601 [2024-07-15 19:45:53.168810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.601 [2024-07-15 19:45:53.168817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.601 [2024-07-15 19:45:53.168821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.168825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.601 [2024-07-15 19:45:53.168830] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:27.601 [2024-07-15 19:45:53.168836] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:27.601 [2024-07-15 19:45:53.168846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.168851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.168854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.601 [2024-07-15 19:45:53.168862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.601 [2024-07-15 19:45:53.168881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.601 [2024-07-15 19:45:53.168935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.601 [2024-07-15 19:45:53.168942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.601 [2024-07-15 19:45:53.168946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.168950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.601 [2024-07-15 19:45:53.168961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.168966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.168970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.601 [2024-07-15 19:45:53.168977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.601 [2024-07-15 19:45:53.169011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.601 [2024-07-15 19:45:53.169071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.601 [2024-07-15 19:45:53.169077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.601 [2024-07-15 19:45:53.169081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.169085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.601 [2024-07-15 19:45:53.169095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.169100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.169103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.601 [2024-07-15 19:45:53.169111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.601 [2024-07-15 19:45:53.169129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.601 [2024-07-15 19:45:53.169181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.601 [2024-07-15 19:45:53.169201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.601 [2024-07-15 19:45:53.169206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.601 [2024-07-15 19:45:53.169210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.601 [2024-07-15 19:45:53.169221] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.602 [2024-07-15 19:45:53.169237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.602 [2024-07-15 19:45:53.169258] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.602 [2024-07-15 19:45:53.169315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.602 [2024-07-15 19:45:53.169321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.602 [2024-07-15 19:45:53.169325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.602 [2024-07-15 19:45:53.169339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169344] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.602 [2024-07-15 19:45:53.169354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.602 [2024-07-15 19:45:53.169374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.602 [2024-07-15 19:45:53.169437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.602 [2024-07-15 19:45:53.169445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.602 [2024-07-15 19:45:53.169448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.602 [2024-07-15 19:45:53.169462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.602 [2024-07-15 19:45:53.169477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.602 [2024-07-15 19:45:53.169497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.602 [2024-07-15 19:45:53.169546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.602 [2024-07-15 19:45:53.169553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.602 [2024-07-15 19:45:53.169557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.602 [2024-07-15 19:45:53.169571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.602 [2024-07-15 19:45:53.169579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.602 [2024-07-15 19:45:53.169586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.602 [2024-07-15 19:45:53.169605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.602 [2024-07-15 19:45:53.169656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.602 [2024-07-15 19:45:53.169662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.602 [2024-07-15 19:45:53.169666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.603 [2024-07-15 19:45:53.169680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.603 [2024-07-15 
19:45:53.169695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.603 [2024-07-15 19:45:53.169714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.603 [2024-07-15 19:45:53.169766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.603 [2024-07-15 19:45:53.169773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.603 [2024-07-15 19:45:53.169776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.603 [2024-07-15 19:45:53.169790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.603 [2024-07-15 19:45:53.169805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.603 [2024-07-15 19:45:53.169852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.603 [2024-07-15 19:45:53.169910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.603 [2024-07-15 19:45:53.169917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.603 [2024-07-15 19:45:53.169921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.603 [2024-07-15 19:45:53.169936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.169946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.603 [2024-07-15 19:45:53.169953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.603 [2024-07-15 19:45:53.169974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.603 [2024-07-15 19:45:53.170025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.603 [2024-07-15 19:45:53.170033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.603 [2024-07-15 19:45:53.170036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.603 [2024-07-15 19:45:53.170051] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.603 [2024-07-15 19:45:53.170067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.603 [2024-07-15 19:45:53.170087] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.603 [2024-07-15 19:45:53.170143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.603 [2024-07-15 19:45:53.170150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.603 [2024-07-15 19:45:53.170168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.603 [2024-07-15 19:45:53.170183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.603 [2024-07-15 19:45:53.170212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.603 [2024-07-15 19:45:53.170234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.603 [2024-07-15 19:45:53.170286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.603 [2024-07-15 19:45:53.170308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.603 [2024-07-15 19:45:53.170312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.603 [2024-07-15 19:45:53.170326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.603 [2024-07-15 19:45:53.170341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.603 [2024-07-15 19:45:53.170361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.603 [2024-07-15 19:45:53.170415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.603 [2024-07-15 19:45:53.170427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.603 [2024-07-15 19:45:53.170431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.603 [2024-07-15 19:45:53.170446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.603 [2024-07-15 19:45:53.170462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.603 [2024-07-15 19:45:53.170481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.603 [2024-07-15 19:45:53.170530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.603 [2024-07-15 
19:45:53.170537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.603 [2024-07-15 19:45:53.170541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.603 [2024-07-15 19:45:53.170555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.603 [2024-07-15 19:45:53.170563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.603 [2024-07-15 19:45:53.170570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.603 [2024-07-15 19:45:53.170589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.170641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.170648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.170652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.170666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.170681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.170700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.170754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.170761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.170764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.170779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.170794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.170813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.170862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.170869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.170872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 
19:45:53.170876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.170886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170891] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.170901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.170920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.170974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.170985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.170989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.170993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.171003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.171019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.171039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.171089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.171096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.171099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.171113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.171129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.171148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.171231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.171240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.171244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.171275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:27.604 [2024-07-15 19:45:53.171280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.171291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.171313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.171366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.171374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.171378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.171394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.171410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.171431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.171487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.171504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.171518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.171533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.171550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.171572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.171639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.171652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.171672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.171687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.171703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.604 [2024-07-15 19:45:53.171723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.604 [2024-07-15 19:45:53.171792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.604 [2024-07-15 19:45:53.171799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.604 [2024-07-15 19:45:53.171802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.604 [2024-07-15 19:45:53.171817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.604 [2024-07-15 19:45:53.171826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.604 [2024-07-15 19:45:53.171833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.605 [2024-07-15 19:45:53.171853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.605 [2024-07-15 19:45:53.171904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.605 [2024-07-15 19:45:53.171911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.605 [2024-07-15 19:45:53.171915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.171919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.605 [2024-07-15 19:45:53.171929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.171934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.171938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.605 [2024-07-15 19:45:53.171945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.605 [2024-07-15 19:45:53.171964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.605 [2024-07-15 19:45:53.172018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.605 [2024-07-15 19:45:53.172029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.605 [2024-07-15 19:45:53.172033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.172038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.605 [2024-07-15 19:45:53.172048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.172053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.172057] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.605 [2024-07-15 19:45:53.172065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.605 
[2024-07-15 19:45:53.172088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.605 [2024-07-15 19:45:53.172162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.605 [2024-07-15 19:45:53.172169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.605 [2024-07-15 19:45:53.176236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.176259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.605 [2024-07-15 19:45:53.176274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.176285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.176289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7d1c00) 00:17:27.605 [2024-07-15 19:45:53.176297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:27.605 [2024-07-15 19:45:53.176323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 3, qid 0 00:17:27.605 [2024-07-15 19:45:53.176386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:27.605 [2024-07-15 19:45:53.176393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:27.605 [2024-07-15 19:45:53.176397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:27.605 [2024-07-15 19:45:53.176401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7d1c00 00:17:27.605 [2024-07-15 19:45:53.176409] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:27.605 0% 00:17:27.605 Data Units Read: 0 00:17:27.605 Data Units Written: 0 00:17:27.605 Host Read Commands: 0 00:17:27.605 Host Write Commands: 0 00:17:27.605 Controller Busy Time: 0 minutes 00:17:27.605 Power Cycles: 0 00:17:27.605 Power On Hours: 0 hours 00:17:27.605 Unsafe Shutdowns: 0 00:17:27.605 Unrecoverable Media Errors: 0 00:17:27.605 Lifetime Error Log Entries: 0 00:17:27.605 Warning Temperature Time: 0 minutes 00:17:27.605 Critical Temperature Time: 0 minutes 00:17:27.605 00:17:27.605 Number of Queues 00:17:27.605 ================ 00:17:27.605 Number of I/O Submission Queues: 127 00:17:27.605 Number of I/O Completion Queues: 127 00:17:27.605 00:17:27.605 Active Namespaces 00:17:27.605 ================= 00:17:27.605 Namespace ID:1 00:17:27.605 Error Recovery Timeout: Unlimited 00:17:27.605 Command Set Identifier: NVM (00h) 00:17:27.605 Deallocate: Supported 00:17:27.605 Deallocated/Unwritten Error: Not Supported 00:17:27.605 Deallocated Read Value: Unknown 00:17:27.605 Deallocate in Write Zeroes: Not Supported 00:17:27.605 Deallocated Guard Field: 0xFFFF 00:17:27.605 Flush: Supported 00:17:27.605 Reservation: Supported 00:17:27.605 Namespace Sharing Capabilities: Multiple Controllers 00:17:27.605 Size (in LBAs): 131072 (0GiB) 00:17:27.605 Capacity (in LBAs): 131072 (0GiB) 00:17:27.605 Utilization (in LBAs): 131072 (0GiB) 00:17:27.605 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:27.605 EUI64: ABCDEF0123456789 00:17:27.605 UUID: 095acf01-2ede-4f5c-870d-5e97e3cf3a05 00:17:27.605 Thin Provisioning: Not Supported 00:17:27.605 Per-NS Atomic Units: Yes 00:17:27.605 Atomic Boundary Size (Normal): 0 00:17:27.605 Atomic Boundary Size (PFail): 0 00:17:27.605 
Atomic Boundary Offset: 0 00:17:27.605 Maximum Single Source Range Length: 65535 00:17:27.605 Maximum Copy Length: 65535 00:17:27.605 Maximum Source Range Count: 1 00:17:27.605 NGUID/EUI64 Never Reused: No 00:17:27.605 Namespace Write Protected: No 00:17:27.605 Number of LBA Formats: 1 00:17:27.605 Current LBA Format: LBA Format #00 00:17:27.605 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:27.605 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:27.605 rmmod nvme_tcp 00:17:27.605 rmmod nvme_fabrics 00:17:27.605 rmmod nvme_keyring 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86990 ']' 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86990 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86990 ']' 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86990 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86990 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:27.605 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:27.606 killing process with pid 86990 00:17:27.606 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86990' 00:17:27.606 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86990 00:17:27.606 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86990 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:27.863 ************************************ 00:17:27.863 END TEST nvmf_identify 00:17:27.863 ************************************ 00:17:27.863 00:17:27.863 real 0m2.640s 00:17:27.863 user 0m7.299s 00:17:27.863 sys 0m0.691s 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.863 19:45:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.120 19:45:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:28.120 19:45:53 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:28.120 19:45:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:28.120 19:45:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.120 19:45:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.120 ************************************ 00:17:28.120 START TEST nvmf_perf 00:17:28.120 ************************************ 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:28.120 * Looking for test storage... 00:17:28.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.120 19:45:53 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.120 19:45:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.121 19:45:53 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:28.121 Cannot find device "nvmf_tgt_br" 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.121 Cannot find 
device "nvmf_tgt_br2" 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:28.121 Cannot find device "nvmf_tgt_br" 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:28.121 Cannot find device "nvmf_tgt_br2" 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:28.121 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:28.378 19:45:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:28.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:28.378 00:17:28.378 --- 10.0.0.2 ping statistics --- 00:17:28.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.378 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:28.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:28.378 00:17:28.378 --- 10.0.0.3 ping statistics --- 00:17:28.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.378 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:28.378 00:17:28.378 --- 10.0.0.1 ping statistics --- 00:17:28.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.378 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=87209 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 87209 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 87209 ']' 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:17:28.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.378 19:45:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:28.635 [2024-07-15 19:45:54.187648] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:28.635 [2024-07-15 19:45:54.187745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.635 [2024-07-15 19:45:54.326202] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.893 [2024-07-15 19:45:54.459641] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.893 [2024-07-15 19:45:54.459718] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.893 [2024-07-15 19:45:54.459745] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.893 [2024-07-15 19:45:54.459756] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.893 [2024-07-15 19:45:54.459765] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.893 [2024-07-15 19:45:54.460096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.893 [2024-07-15 19:45:54.460242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.893 [2024-07-15 19:45:54.460827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.893 [2024-07-15 19:45:54.460866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.457 19:45:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.457 19:45:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:29.457 19:45:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.457 19:45:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.457 19:45:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:29.716 19:45:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.716 19:45:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:29.716 19:45:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:29.974 19:45:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:29.974 19:45:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:30.232 19:45:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:30.232 19:45:55 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:30.850 19:45:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:30.850 19:45:56 nvmf_tcp.nvmf_perf -- 
host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:30.850 19:45:56 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:30.850 19:45:56 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:30.850 19:45:56 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:30.850 [2024-07-15 19:45:56.545126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.850 19:45:56 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.108 19:45:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:31.108 19:45:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.366 19:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:31.366 19:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:31.624 19:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.882 [2024-07-15 19:45:57.538314] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.882 19:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:32.140 19:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:32.140 19:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:32.140 19:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:32.140 19:45:57 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:33.515 Initializing NVMe Controllers 00:17:33.515 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:33.515 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:33.515 Initialization complete. Launching workers. 00:17:33.515 ======================================================== 00:17:33.515 Latency(us) 00:17:33.515 Device Information : IOPS MiB/s Average min max 00:17:33.515 PCIE (0000:00:10.0) NSID 1 from core 0: 23160.98 90.47 1381.07 364.07 7588.65 00:17:33.515 ======================================================== 00:17:33.515 Total : 23160.98 90.47 1381.07 364.07 7588.65 00:17:33.515 00:17:33.515 19:45:58 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:34.448 Initializing NVMe Controllers 00:17:34.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:34.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:34.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:34.448 Initialization complete. Launching workers. 
00:17:34.448 ======================================================== 00:17:34.448 Latency(us) 00:17:34.448 Device Information : IOPS MiB/s Average min max 00:17:34.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3877.99 15.15 256.51 101.30 7135.41 00:17:34.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.00 0.48 8220.93 5971.63 11993.24 00:17:34.448 ======================================================== 00:17:34.448 Total : 3999.99 15.62 499.43 101.30 11993.24 00:17:34.448 00:17:34.705 19:46:00 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:36.079 Initializing NVMe Controllers 00:17:36.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:36.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:36.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:36.079 Initialization complete. Launching workers. 00:17:36.079 ======================================================== 00:17:36.079 Latency(us) 00:17:36.079 Device Information : IOPS MiB/s Average min max 00:17:36.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9354.74 36.54 3421.04 779.84 7786.28 00:17:36.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2707.27 10.58 11894.94 6258.36 22822.13 00:17:36.079 ======================================================== 00:17:36.079 Total : 12062.01 47.12 5322.97 779.84 22822.13 00:17:36.079 00:17:36.079 19:46:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:36.079 19:46:01 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:38.615 Initializing NVMe Controllers 00:17:38.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:38.615 Controller IO queue size 128, less than required. 00:17:38.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:38.615 Controller IO queue size 128, less than required. 00:17:38.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:38.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:38.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:38.615 Initialization complete. Launching workers. 
00:17:38.615 ======================================================== 00:17:38.615 Latency(us) 00:17:38.615 Device Information : IOPS MiB/s Average min max 00:17:38.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1673.66 418.42 77858.85 44141.86 133018.85 00:17:38.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 575.04 143.76 229436.19 84614.29 348991.09 00:17:38.615 ======================================================== 00:17:38.615 Total : 2248.70 562.18 116620.35 44141.86 348991.09 00:17:38.615 00:17:38.615 19:46:04 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:38.615 Initializing NVMe Controllers 00:17:38.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:38.615 Controller IO queue size 128, less than required. 00:17:38.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:38.615 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:38.615 Controller IO queue size 128, less than required. 00:17:38.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:38.615 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:38.615 WARNING: Some requested NVMe devices were skipped 00:17:38.615 No valid NVMe controllers or AIO or URING devices found 00:17:38.615 19:46:04 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:41.199 Initializing NVMe Controllers 00:17:41.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.199 Controller IO queue size 128, less than required. 00:17:41.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.199 Controller IO queue size 128, less than required. 00:17:41.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:41.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:41.199 Initialization complete. Launching workers. 
00:17:41.199 00:17:41.199 ==================== 00:17:41.199 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:41.199 TCP transport: 00:17:41.199 polls: 11612 00:17:41.199 idle_polls: 8622 00:17:41.199 sock_completions: 2990 00:17:41.199 nvme_completions: 5007 00:17:41.199 submitted_requests: 7500 00:17:41.199 queued_requests: 1 00:17:41.199 00:17:41.199 ==================== 00:17:41.199 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:41.199 TCP transport: 00:17:41.199 polls: 9569 00:17:41.199 idle_polls: 6563 00:17:41.199 sock_completions: 3006 00:17:41.199 nvme_completions: 6035 00:17:41.199 submitted_requests: 9040 00:17:41.199 queued_requests: 1 00:17:41.199 ======================================================== 00:17:41.199 Latency(us) 00:17:41.199 Device Information : IOPS MiB/s Average min max 00:17:41.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1249.50 312.37 104721.25 76915.07 178059.56 00:17:41.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1506.09 376.52 86103.58 48224.82 143028.79 00:17:41.199 ======================================================== 00:17:41.199 Total : 2755.58 688.90 94545.61 48224.82 178059.56 00:17:41.199 00:17:41.199 19:46:06 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:41.199 19:46:06 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.458 19:46:07 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:41.458 19:46:07 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:41.458 19:46:07 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:41.458 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.458 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.715 rmmod nvme_tcp 00:17:41.715 rmmod nvme_fabrics 00:17:41.715 rmmod nvme_keyring 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 87209 ']' 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 87209 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 87209 ']' 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 87209 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87209 00:17:41.715 killing process with pid 87209 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:41.715 19:46:07 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87209' 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 87209 00:17:41.715 19:46:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 87209 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:42.283 00:17:42.283 real 0m14.370s 00:17:42.283 user 0m53.083s 00:17:42.283 sys 0m3.651s 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.283 19:46:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:42.283 ************************************ 00:17:42.283 END TEST nvmf_perf 00:17:42.283 ************************************ 00:17:42.543 19:46:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:42.543 19:46:08 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:42.543 19:46:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:42.543 19:46:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.543 19:46:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:42.543 ************************************ 00:17:42.543 START TEST nvmf_fio_host 00:17:42.543 ************************************ 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:42.543 * Looking for test storage... 
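Before the fio host test below, it is worth condensing what the nvmf_perf run above actually did on the target side. The RPC sequence and the perf invocation are reproduced from the trace (paths, NQN, and listener address exactly as logged); this is a readability sketch of perf.sh's effect, not its full logic, and the local-PCIe baseline pass and the teardown are left out:

  # Target application runs inside the namespace set up earlier
  # (the suite waits for the RPC socket before issuing the calls below)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py bdev_malloc_create 64 512                                  # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # local NVMe bdev attached earlier via gen_nvme.sh
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # First fabric-side measurement pass, as traced:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later passes in the trace mainly vary queue depth, I/O size, and duration (-q, -o, -t), plus flags such as -HI and --transport-stat for the header-digest and TCP-statistics runs. Note the serial number passed to nvmf_create_subsystem is not visible at this point in the trace; SPDKISFASTANDAWESOME is the NVMF_SERIAL default sourced from nvmf/common.sh above and is an assumption of this sketch.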
00:17:42.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
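The nvmf/common.sh variables sourced above (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID, NVME_HOST, NVME_CONNECT, NVMF_PORT) are the pieces the suite's kernel-initiator tests expand into an nvme-cli connect call. This fio host test drives I/O through SPDK's fio plugin instead (visible further below), so the expansion is shown here only as an illustrative sketch; the target address variable is assigned just below and the cnode1 subsystem is created later in the trace:

  # Illustrative only -- not executed by fio.sh itself.
  # "$NVME_CONNECT" is 'nvme connect'; "${NVME_HOST[@]}" expands to --hostnqn=... --hostid=...
  $NVME_CONNECT -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
      -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"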
00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.543 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:42.544 Cannot find device "nvmf_tgt_br" 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.544 Cannot find device "nvmf_tgt_br2" 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:42.544 Cannot find device "nvmf_tgt_br" 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:42.544 Cannot find device "nvmf_tgt_br2" 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:17:42.544 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:42.802 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:42.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:42.803 00:17:42.803 --- 10.0.0.2 ping statistics --- 00:17:42.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.803 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:42.803 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:42.803 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:17:42.803 00:17:42.803 --- 10.0.0.3 ping statistics --- 00:17:42.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.803 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:42.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:42.803 00:17:42.803 --- 10.0.0.1 ping statistics --- 00:17:42.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.803 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87690 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87690 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87690 ']' 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.803 19:46:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.061 [2024-07-15 19:46:08.611334] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:43.061 [2024-07-15 19:46:08.611434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.061 [2024-07-15 19:46:08.752709] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.319 [2024-07-15 19:46:08.872974] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
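For readers reconstructing the environment, the namespace/veth plumbing that nvmf_veth_init performs earlier in this trace can be reproduced standalone with a sketch along the following lines. Interface names, addresses, and iptables rules are taken directly from the commands logged above; the real helper also tears down any leftover interfaces first (hence the "Cannot find device" messages), which is omitted here.

```bash
#!/usr/bin/env bash
# Sketch of the test network built above: one initiator veth pair on the host,
# two target veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace,
# all host-side peers enslaved to the nvmf_br bridge.
set -e

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic reach the initiator interface and let the bridge forward.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same sanity pings the log shows.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```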
00:17:43.319 [2024-07-15 19:46:08.873247] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.319 [2024-07-15 19:46:08.873435] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.319 [2024-07-15 19:46:08.873550] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.319 [2024-07-15 19:46:08.873710] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.319 [2024-07-15 19:46:08.873974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.319 [2024-07-15 19:46:08.874056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.319 [2024-07-15 19:46:08.874107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.319 [2024-07-15 19:46:08.874109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.886 19:46:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.886 19:46:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:17:43.886 19:46:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:44.144 [2024-07-15 19:46:09.856817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.144 19:46:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:44.144 19:46:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.144 19:46:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.402 19:46:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:44.661 Malloc1 00:17:44.661 19:46:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.920 19:46:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:44.920 19:46:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.178 [2024-07-15 19:46:10.865294] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.178 19:46:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:45.436 19:46:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:45.695 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:45.695 fio-3.35 00:17:45.695 Starting 1 thread 00:17:48.219 00:17:48.219 test: (groupid=0, jobs=1): err= 0: pid=87816: Mon Jul 15 19:46:13 2024 00:17:48.219 read: IOPS=9118, BW=35.6MiB/s (37.3MB/s)(71.5MiB/2007msec) 00:17:48.219 slat (nsec): min=1915, max=341711, avg=2522.68, stdev=3455.70 00:17:48.219 clat (usec): min=3240, max=12659, avg=7310.44, stdev=565.44 00:17:48.219 lat (usec): min=3286, max=12661, avg=7312.96, stdev=565.24 00:17:48.219 clat percentiles (usec): 00:17:48.219 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6849], 00:17:48.219 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7373], 00:17:48.219 | 70.00th=[ 7570], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:17:48.219 | 99.00th=[ 8848], 99.50th=[ 9634], 99.90th=[11338], 99.95th=[11600], 00:17:48.219 | 99.99th=[12256] 00:17:48.219 bw ( KiB/s): min=34746, max=37928, per=99.94%, avg=36450.50, stdev=1308.61, samples=4 00:17:48.219 iops : min= 8686, max= 9482, avg=9112.50, stdev=327.37, samples=4 00:17:48.219 write: IOPS=9129, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2007msec); 0 zone resets 00:17:48.219 slat (nsec): min=1979, max=269853, avg=2621.55, stdev=2594.99 00:17:48.219 clat (usec): min=2478, max=12616, avg=6650.70, stdev=521.10 
00:17:48.219 lat (usec): min=2493, max=12618, avg=6653.32, stdev=520.97 00:17:48.219 clat percentiles (usec): 00:17:48.219 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6259], 00:17:48.219 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6718], 00:17:48.219 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7439], 00:17:48.219 | 99.00th=[ 8029], 99.50th=[ 8586], 99.90th=[11469], 99.95th=[11994], 00:17:48.219 | 99.99th=[12387] 00:17:48.219 bw ( KiB/s): min=35552, max=37280, per=99.97%, avg=36506.00, stdev=715.76, samples=4 00:17:48.219 iops : min= 8888, max= 9320, avg=9126.50, stdev=178.94, samples=4 00:17:48.219 lat (msec) : 4=0.07%, 10=99.72%, 20=0.21% 00:17:48.219 cpu : usr=66.20%, sys=24.68%, ctx=21, majf=0, minf=7 00:17:48.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:48.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.219 issued rwts: total=18300,18323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.219 00:17:48.219 Run status group 0 (all jobs): 00:17:48.219 READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.5MiB (75.0MB), run=2007-2007msec 00:17:48.219 WRITE: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.1MB), run=2007-2007msec 00:17:48.219 19:46:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:48.219 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:48.219 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:48.219 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:48.219 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:48.219 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
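The fio_plugin helper whose trace surrounds this point ultimately just preloads SPDK's fio ioengine (after probing for a sanitizer runtime to preload alongside it) and hands stock fio a --filename string that encodes the NVMe/TCP connection parameters rather than a block-device path. Stripped of that bookkeeping, the first run above amounts to:

```bash
# Equivalent of the fio_nvme call traced above: stock fio, SPDK's fio plugin
# preloaded, and the target described entirely in the --filename string.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
```

The second run uses mock_sgl_config.fio and drops the --bs override, which is why its output below reports 16 KiB blocks instead of 4 KiB.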
00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:48.220 19:46:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:48.220 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:48.220 fio-3.35 00:17:48.220 Starting 1 thread 00:17:50.746 00:17:50.746 test: (groupid=0, jobs=1): err= 0: pid=87865: Mon Jul 15 19:46:16 2024 00:17:50.746 read: IOPS=8324, BW=130MiB/s (136MB/s)(261MiB/2004msec) 00:17:50.746 slat (usec): min=2, max=310, avg= 3.63, stdev= 3.23 00:17:50.746 clat (usec): min=2454, max=17361, avg=9095.47, stdev=2163.65 00:17:50.746 lat (usec): min=2458, max=17365, avg=9099.10, stdev=2163.66 00:17:50.746 clat percentiles (usec): 00:17:50.746 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7177], 00:17:50.746 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9634], 00:17:50.746 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11600], 95.00th=[12649], 00:17:50.746 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16909], 99.95th=[16909], 00:17:50.746 | 99.99th=[17433] 00:17:50.746 bw ( KiB/s): min=59200, max=76608, per=50.57%, avg=67352.00, stdev=9073.17, samples=4 00:17:50.746 iops : min= 3700, max= 4788, avg=4209.50, stdev=567.07, samples=4 00:17:50.746 write: IOPS=4997, BW=78.1MiB/s (81.9MB/s)(138MiB/1770msec); 0 zone resets 00:17:50.746 slat (usec): min=31, max=355, avg=37.39, stdev= 8.99 00:17:50.746 clat (usec): min=3430, max=19581, avg=11141.85, stdev=1904.88 00:17:50.746 lat (usec): min=3463, max=19615, avg=11179.24, stdev=1904.79 00:17:50.746 clat percentiles (usec): 00:17:50.746 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:17:50.746 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11338], 00:17:50.746 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13698], 95.00th=[14615], 00:17:50.746 | 99.00th=[16319], 99.50th=[17171], 99.90th=[18744], 99.95th=[19006], 00:17:50.746 | 99.99th=[19530] 00:17:50.746 bw ( KiB/s): min=61280, max=79456, per=88.05%, avg=70408.00, stdev=9346.53, samples=4 00:17:50.746 iops : min= 3830, max= 4966, avg=4400.50, stdev=584.16, samples=4 00:17:50.746 lat (msec) : 4=0.19%, 10=51.75%, 20=48.06% 00:17:50.746 cpu : usr=74.10%, sys=17.07%, ctx=4, majf=0, minf=24 00:17:50.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:50.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:50.746 issued rwts: total=16683,8846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:50.746 00:17:50.746 Run status group 0 (all jobs): 00:17:50.746 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=261MiB (273MB), run=2004-2004msec 00:17:50.746 WRITE: bw=78.1MiB/s (81.9MB/s), 
78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=138MiB (145MB), run=1770-1770msec 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.746 rmmod nvme_tcp 00:17:50.746 rmmod nvme_fabrics 00:17:50.746 rmmod nvme_keyring 00:17:50.746 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87690 ']' 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87690 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87690 ']' 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87690 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87690 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:50.747 killing process with pid 87690 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87690' 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87690 00:17:50.747 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87690 00:17:51.003 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.003 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.003 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.003 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.003 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.003 19:46:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.003 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.003 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.260 19:46:16 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:51.260 ************************************ 00:17:51.260 END TEST nvmf_fio_host 00:17:51.260 ************************************ 00:17:51.260 00:17:51.260 real 0m8.689s 00:17:51.260 user 0m35.518s 00:17:51.260 sys 0m2.248s 00:17:51.260 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.260 19:46:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.260 19:46:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:51.260 19:46:16 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:51.260 19:46:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.260 19:46:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.260 19:46:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.260 ************************************ 00:17:51.260 START TEST nvmf_failover 00:17:51.260 ************************************ 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:51.260 * Looking for test storage... 00:17:51.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
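Alongside the transport defaults, the common.sh sourcing above also prepares kernel-initiator settings: a generated host NQN/ID pair and the NVME_CONNECT/NVME_HOST helpers. This failover test drives I/O through bdevperf rather than the kernel initiator, but for tests that do use that path, the variables expand to a connect call of roughly the shape below. This is illustrative only: the address and subsystem name are the ones this test configures, and the NQN/ID values are the generated ones shown in the trace above.

```bash
# Hypothetical expansion of: $NVME_CONNECT "${NVME_HOST[@]}" ...
# Not executed by this test; shown only to explain what the variables are for.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb \
  --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb
```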
00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:51.260 Cannot find device "nvmf_tgt_br" 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.260 Cannot find device "nvmf_tgt_br2" 00:17:51.260 19:46:16 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:51.260 19:46:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:51.260 Cannot find device "nvmf_tgt_br" 00:17:51.260 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:17:51.260 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:51.260 Cannot find device "nvmf_tgt_br2" 00:17:51.260 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:17:51.260 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:51.517 19:46:17 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:51.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:17:51.517 00:17:51.517 --- 10.0.0.2 ping statistics --- 00:17:51.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.517 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:51.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:51.517 00:17:51.517 --- 10.0.0.3 ping statistics --- 00:17:51.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.517 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:51.517 00:17:51.517 --- 10.0.0.1 ping statistics --- 00:17:51.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.517 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.517 19:46:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=88084 00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 88084 00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88084 ']' 
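nvmfappstart, whose trace starts above and continues below, boils down to launching nvmf_tgt inside the target namespace and waiting for its JSON-RPC socket before any configuration is attempted. A simplified sketch follows; the real waitforlisten helper retries and validates more than this bare socket check.

```bash
# Start the NVMe-oF target in the test namespace, exactly as logged above.
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Wait until the RPC socket exists before issuing rpc.py commands.
for _ in $(seq 1 100); do
  [ -S /var/tmp/spdk.sock ] && break
  sleep 0.1
done
```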
00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.774 19:46:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:51.774 [2024-07-15 19:46:17.362695] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:17:51.774 [2024-07-15 19:46:17.363013] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.774 [2024-07-15 19:46:17.503626] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.031 [2024-07-15 19:46:17.608485] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.031 [2024-07-15 19:46:17.608543] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.031 [2024-07-15 19:46:17.608570] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.031 [2024-07-15 19:46:17.608577] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.031 [2024-07-15 19:46:17.608583] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
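Once the application is up, everything else happens over JSON-RPC. The target configuration that follows in the trace condenses to the rpc.py sequence below; arguments are copied from the logged commands, and the comments are interpretation.

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the options the test uses (-t tcp -o -u 8192).
$rpc nvmf_create_transport -t tcp -o -u 8192

# A 64 MiB malloc bdev with 512-byte blocks to back the namespace.
$rpc bdev_malloc_create 64 512 -b Malloc0

# Subsystem cnode1: allow any host (-a), fixed serial (-s), one namespace,
# and listeners on three ports so the failover test can move the active path.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
```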
00:17:52.031 [2024-07-15 19:46:17.608727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.031 [2024-07-15 19:46:17.609480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.031 [2024-07-15 19:46:17.609489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.597 19:46:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.597 19:46:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:52.597 19:46:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.597 19:46:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.597 19:46:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:52.597 19:46:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.597 19:46:18 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:52.854 [2024-07-15 19:46:18.542184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.854 19:46:18 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:53.112 Malloc0 00:17:53.112 19:46:18 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:53.369 19:46:19 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.627 19:46:19 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.885 [2024-07-15 19:46:19.592695] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.885 19:46:19 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:54.206 [2024-07-15 19:46:19.812815] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:54.206 19:46:19 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:54.464 [2024-07-15 19:46:20.033034] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:54.464 19:46:20 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88200 00:17:54.464 19:46:20 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:54.464 19:46:20 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.465 19:46:20 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88200 /var/tmp/bdevperf.sock 00:17:54.465 19:46:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88200 ']' 00:17:54.465 19:46:20 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.465 19:46:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.465 19:46:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.465 19:46:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.465 19:46:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:55.398 19:46:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.398 19:46:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:55.398 19:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.657 NVMe0n1 00:17:55.657 19:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.916 00:17:55.916 19:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:55.916 19:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88243 00:17:55.916 19:46:21 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:57.293 19:46:22 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.293 [2024-07-15 19:46:22.940077] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940153] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940221] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940230] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940238] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940256] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940274] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 
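The failover scenario itself is driven from the initiator side by bdevperf: it is started idle on its own RPC socket, the same subsystem is attached twice so bdev NVMe0 has two TCP paths, a 15-second verify workload is kicked off, and listeners are then removed from under the active path while it runs. The bursts of "recv state of tqpair ... state(5)" messages in the log coincide with those listener removals. A condensed sketch of the opening steps, using the commands visible in the surrounding trace (the later re-add of 4420 and removal of 4422 repeat the same pattern):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bp_sock=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) on its own RPC socket; the real test waits for
# $bp_sock with waitforlisten before issuing any RPCs to it.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -z -r "$bp_sock" -q 128 -o 4096 -w verify -t 15 -f &

# Two paths to the same subsystem: ports 4420 and 4421.
$rpc -s "$bp_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s "$bp_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
  -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Run the verify workload in the background...
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s "$bp_sock" perform_tests &

# ...and pull the first listener out from under it to force a path failover.
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3

# A third path is attached on 4422 before the next listener is removed.
$rpc -s "$bp_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
  -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
```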
00:17:57.293 [2024-07-15 19:46:22.940282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940291] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940315] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940331] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940339] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.293 [2024-07-15 19:46:22.940347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.294 [2024-07-15 19:46:22.940356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.294 [2024-07-15 19:46:22.940364] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.294 [2024-07-15 19:46:22.940372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.294 [2024-07-15 19:46:22.940380] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.294 [2024-07-15 19:46:22.940389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.294 [2024-07-15 19:46:22.940398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.294 [2024-07-15 19:46:22.940406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e2ff0 is same with the state(5) to be set 00:17:57.294 19:46:22 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:00.575 19:46:25 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:00.575 00:18:00.575 19:46:26 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:00.833 [2024-07-15 19:46:26.519350] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3f20 is same with the state(5) to be set 00:18:00.833 [2024-07-15 19:46:26.519402] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3f20 is same with the state(5) to be set 00:18:00.833 [2024-07-15 19:46:26.519414] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3f20 is same with the state(5) to be set
(... the same tcp.c:1621 recv-state error for tqpair=0x5e3f20 repeats verbatim, timestamps 19:46:26.519423 through 19:46:26.519917 ...)
00:18:00.833 19:46:26 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:18:04.114 19:46:29 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:04.114 [2024-07-15 19:46:29.779333] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:04.114 19:46:29 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:18:05.045 19:46:30 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:18:05.304 [2024-07-15 19:46:31.061725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e4ec0 is same with the state(5) to be set
(... the same tcp.c:1621 recv-state error for tqpair=0x5e4ec0 repeats verbatim, timestamps 19:46:31.061792 through 19:46:31.062447 ...)
00:18:05.562 19:46:31 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 88243
00:18:12.146 0
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 88200
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88200 ']'
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88200
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88200
00:18:12.146 killing process with pid 88200
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88200'
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88200
00:18:12.146 19:46:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88200
00:18:12.146 19:46:37 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:12.146 [2024-07-15 19:46:20.097034] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization...
00:18:12.146 [2024-07-15 19:46:20.097168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88200 ]
00:18:12.146 [2024-07-15 19:46:20.234208] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:12.146 [2024-07-15 19:46:20.353035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:12.146 Running I/O for 15 seconds...
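For reference, the two rpc.py calls traced above (failover.sh lines 53 and 57) are thin wrappers over the target's JSON-RPC interface and can be replayed by hand against a running nvmf target. The sketch below is illustrative only: the default /var/tmp/spdk.sock socket and the assumption that the subsystem already exists are not shown by this run.

    # Hedged sketch, not part of failover.sh: replay the listener add/remove step by hand.
    # Assumes a running SPDK nvmf target serving JSON-RPC on the default /var/tmp/spdk.sock
    # and an already-created subsystem nqn.2016-06.io.spdk:cnode1.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBSYS=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_subsystem_add_listener "$SUBSYS" -t tcp -a 10.0.0.2 -s 4420      # same form as failover.sh@53
    "$RPC" nvmf_subsystem_remove_listener "$SUBSYS" -t tcp -a 10.0.0.2 -s 4422   # same form as failover.sh@57
    "$RPC" nvmf_get_subsystems                                                   # list subsystems and their remaining listeners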
00:18:12.146 [2024-07-15 19:46:22.941731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 19:46:22.941774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin commands cid:1, cid:2 and cid:3 between 19:46:22.941791 and 19:46:22.941892 ...)
[2024-07-15 19:46:22.941906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5fd0 is same with the state(5) to be set
[2024-07-15 19:46:22.941982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-15 19:46:22.942004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(... the same print_command / ABORTED - SQ DELETION pair repeats for every outstanding I/O on sqid:1: WRITEs covering lba 87584-88272 (SGL DATA BLOCK) and READs covering lba 87264-87568 (SGL TRANSPORT DATA BLOCK), all len:8; timestamps 19:46:22.942028 through 19:46:22.946117 ...)
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.149 [2024-07-15 19:46:22.946160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.149 [2024-07-15 19:46:22.946171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88280 len:8 PRP1 0x0 PRP2 0x0 00:18:12.149 [2024-07-15 19:46:22.946198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:22.946257] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe34d60 was disconnected and freed. reset controller. 00:18:12.149 [2024-07-15 19:46:22.946275] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:12.149 [2024-07-15 19:46:22.946306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:12.149 [2024-07-15 19:46:22.950183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:12.149 [2024-07-15 19:46:22.950220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc5fd0 (9): Bad file descriptor 00:18:12.149 [2024-07-15 19:46:22.988205] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:12.149 [2024-07-15 19:46:26.520342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520609] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.149 [2024-07-15 19:46:26.520870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.149 [2024-07-15 19:46:26.520899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.149 [2024-07-15 19:46:26.520928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.149 [2024-07-15 19:46:26.520957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.520972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.149 [2024-07-15 19:46:26.520985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.521000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.149 [2024-07-15 19:46:26.521014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.521029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.149 [2024-07-15 19:46:26.521042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.149 [2024-07-15 19:46:26.521061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521255] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101680 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.521980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.521995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.522011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.522031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.522047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.522061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.522077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.522092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.522107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.522122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.522145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.522171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.522189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.522204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.522220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.522234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.150 [2024-07-15 19:46:26.522250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.150 [2024-07-15 19:46:26.522264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:12.151 [2024-07-15 19:46:26.522294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.151 [2024-07-15 19:46:26.522323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.151 [2024-07-15 19:46:26.522366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.151 [2024-07-15 19:46:26.522396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.151 [2024-07-15 19:46:26.522425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.151 [2024-07-15 19:46:26.522456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.151 [2024-07-15 19:46:26.522486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.151 [2024-07-15 19:46:26.522547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.151 [2024-07-15 19:46:26.522587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 
19:46:26.522645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.522977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.522990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.151 [2024-07-15 19:46:26.523635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.151 [2024-07-15 19:46:26.523649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 
[2024-07-15 19:46:26.523902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:12.152 [2024-07-15 19:46:26.523974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.523989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524232] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.152 [2024-07-15 19:46:26.524513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524543] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:12.152 [2024-07-15 19:46:26.524569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:12.152 [2024-07-15 19:46:26.524581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101968 len:8 PRP1 0x0 PRP2 0x0 00:18:12.152 [2024-07-15 19:46:26.524610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524671] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfe1340 was disconnected and freed. reset controller. 00:18:12.152 [2024-07-15 19:46:26.524690] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:12.152 [2024-07-15 19:46:26.524743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.152 [2024-07-15 19:46:26.524763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.152 [2024-07-15 19:46:26.524791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.152 [2024-07-15 19:46:26.524817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.152 [2024-07-15 19:46:26.524850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.152 [2024-07-15 19:46:26.524863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:12.152 [2024-07-15 19:46:26.524911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc5fd0 (9): Bad file descriptor 00:18:12.152 [2024-07-15 19:46:26.528767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:12.152 [2024-07-15 19:46:26.560451] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:12.152 [2024-07-15 19:46:31.063073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:12.152 [2024-07-15 19:46:31.063229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE command/completion pattern repeats for the remaining queued I/O on qid:1 -- READ lba:67392-67848 and WRITE lba:67856-68384, len:8 each -- with every command completed as ABORTED - SQ DELETION (00/08) ...]
00:18:12.156 [2024-07-15 19:46:31.067266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:12.156 [2024-07-15 19:46:31.067283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68392 len:8 PRP1 0x0 PRP2 0x0
00:18:12.156 [2024-07-15 19:46:31.067297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.156 [2024-07-15 19:46:31.067315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:12.156 [2024-07-15 19:46:31.067327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:12.156 [2024-07-15 19:46:31.067338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68400 len:8 PRP1 0x0 PRP2 0x0
00:18:12.156 [2024-07-15 19:46:31.067351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.156 [2024-07-15 19:46:31.067410] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe15020 was disconnected and freed. reset controller.
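The block of NOTICE lines above is bdev_nvme draining a queue pair during failover: each queued READ/WRITE is completed back with ABORTED - SQ DELETION (00/08) before the qpair is freed and the controller is reset. A minimal bash sketch for pulling the interesting counts out of a capture like this is shown below; the try.txt path is taken from the failover.sh trace further down, while the helper name itself is only an illustration and not part of the SPDK test scripts.

summarize_failover_log() {
    # Default to the capture file the failover test cats and later removes
    local log=${1:-/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt}
    # Queued I/O completed with ABORTED - SQ DELETION while a qpair was torn down
    grep -c 'ABORTED - SQ DELETION' "$log"
    # Failover transitions between listener ports (bdev_nvme_failover_trid notices)
    grep 'Start failover from' "$log"
    # Controller resets that completed; failover.sh later checks this count is 3
    grep -c 'Resetting controller successful' "$log"
}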
00:18:12.156 [2024-07-15 19:46:31.067429] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:18:12.156 [2024-07-15 19:46:31.067485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:12.156 [2024-07-15 19:46:31.067507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.156 [2024-07-15 19:46:31.067523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:12.156 [2024-07-15 19:46:31.067547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.156 [2024-07-15 19:46:31.067562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:12.156 [2024-07-15 19:46:31.067587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.156 [2024-07-15 19:46:31.067602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:12.156 [2024-07-15 19:46:31.067616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.156 [2024-07-15 19:46:31.067630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:12.156 [2024-07-15 19:46:31.067680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc5fd0 (9): Bad file descriptor
00:18:12.156 [2024-07-15 19:46:31.071458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:12.156 [2024-07-15 19:46:31.106828] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:12.156
00:18:12.156 Latency(us)
00:18:12.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.156 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:12.156 Verification LBA range: start 0x0 length 0x4000
00:18:12.156 NVMe0n1 : 15.00 9572.79 37.39 217.89 0.00 13044.76 577.16 21686.46
00:18:12.156 ===================================================================================================================
00:18:12.156 Total : 9572.79 37.39 217.89 0.00 13044.76 577.16 21686.46
00:18:12.156 Received shutdown signal, test time was about 15.000000 seconds
00:18:12.156
00:18:12.156 Latency(us)
00:18:12.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.156 ===================================================================================================================
00:18:12.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:12.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
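The "Waiting for process to start up..." message above and the failover.sh@72/@75 lines that follow show the test launching a second bdevperf instance on a private RPC socket and only proceeding once that socket answers. A condensed sketch of that launch pattern, reusing the paths and options from the trace (the polling loop is an assumption standing in for waitforlisten):

sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# Poll the RPC socket until the application is ready to accept commands
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

With -z, bdevperf then sits idle until the bdevperf.py perform_tests call seen later in the trace starts the actual 1-second verify run.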
00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88446 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88446 /var/tmp/bdevperf.sock 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88446 ']' 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.157 19:46:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:12.414 19:46:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.414 19:46:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:12.414 19:46:38 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:12.672 [2024-07-15 19:46:38.318514] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:12.672 19:46:38 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:12.931 [2024-07-15 19:46:38.542738] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:12.931 19:46:38 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:13.189 NVMe0n1 00:18:13.189 19:46:38 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:13.447 00:18:13.447 19:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:14.023 00:18:14.023 19:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:14.023 19:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:14.023 19:46:39 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:14.589 19:46:40 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:17.869 19:46:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:17.869 19:46:43 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:17.869 19:46:43 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88585 00:18:17.869 19:46:43 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.869 19:46:43 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88585 00:18:18.802 0 00:18:18.802 19:46:44 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:18.802 [2024-07-15 19:46:37.093057] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:18:18.802 [2024-07-15 19:46:37.093139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88446 ] 00:18:18.802 [2024-07-15 19:46:37.223375] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.802 [2024-07-15 19:46:37.321813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.802 [2024-07-15 19:46:40.054732] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:18.802 [2024-07-15 19:46:40.054889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.802 [2024-07-15 19:46:40.054914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.802 [2024-07-15 19:46:40.054933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.802 [2024-07-15 19:46:40.054947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.802 [2024-07-15 19:46:40.054962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.802 [2024-07-15 19:46:40.054975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.802 [2024-07-15 19:46:40.054990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.802 [2024-07-15 19:46:40.055003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.802 [2024-07-15 19:46:40.055029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:18.802 [2024-07-15 19:46:40.055075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:18.802 [2024-07-15 19:46:40.055106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ffd0 (9): Bad file descriptor 00:18:18.802 [2024-07-15 19:46:40.061386] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:18.802 Running I/O for 1 seconds... 
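By this point failover.sh has added two extra TCP listeners (ports 4421 and 4422) to nqn.2016-06.io.spdk:cnode1, attached the same subsystem to bdevperf over all three ports so bdev_nvme holds three paths behind the single NVMe0 controller, and begun detaching those paths one at a time (4420 above, 4422 and 4421 in the trace that follows); each detach forces a reset onto a surviving path. A minimal sketch of that RPC sequence, with the addresses, ports, and NQN copied from the trace and the loops added only for illustration:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
# Attach the same subsystem over each TCP listener so NVMe0 has three paths
for port in 4420 4421 4422; do
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
done
# Drop the paths one at a time; bdev_nvme resets and fails over each time
for port in 4420 4422 4421; do
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
    sleep 3
done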
00:18:18.802 00:18:18.802 Latency(us) 00:18:18.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.802 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:18.802 Verification LBA range: start 0x0 length 0x4000 00:18:18.802 NVMe0n1 : 1.00 9491.84 37.08 0.00 0.00 13414.29 1832.03 14477.50 00:18:18.802 =================================================================================================================== 00:18:18.802 Total : 9491.84 37.08 0.00 0.00 13414.29 1832.03 14477.50 00:18:18.802 19:46:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:18.802 19:46:44 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:19.060 19:46:44 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:19.317 19:46:44 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:19.317 19:46:45 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:19.575 19:46:45 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:19.834 19:46:45 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88446 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88446 ']' 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88446 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88446 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:23.112 killing process with pid 88446 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88446' 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88446 00:18:23.112 19:46:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88446 00:18:23.370 19:46:48 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:23.370 19:46:49 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:23.629 19:46:49 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.629 rmmod nvme_tcp 00:18:23.629 rmmod nvme_fabrics 00:18:23.629 rmmod nvme_keyring 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 88084 ']' 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 88084 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88084 ']' 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88084 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88084 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:23.629 killing process with pid 88084 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88084' 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88084 00:18:23.629 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88084 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:23.887 00:18:23.887 real 0m32.802s 00:18:23.887 user 2m7.495s 00:18:23.887 sys 0m4.801s 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.887 19:46:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:23.887 ************************************ 00:18:23.887 END TEST nvmf_failover 00:18:23.887 ************************************ 00:18:24.144 19:46:49 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:18:24.144 19:46:49 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:24.144 19:46:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:24.144 19:46:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.144 19:46:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:24.145 ************************************ 00:18:24.145 START TEST nvmf_host_discovery 00:18:24.145 ************************************ 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:24.145 * Looking for test storage... 00:18:24.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:24.145 Cannot find device "nvmf_tgt_br" 00:18:24.145 
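The "Cannot find device" messages here come from nvmf_veth_init (test/nvmf/common.sh) clearing leftover interfaces before it rebuilds the test topology; the full command sequence appears verbatim in the log below. As a condensed sketch of what that setup amounts to, using the addresses and interface names of this run: one initiator veth pair kept on the host at 10.0.0.1/24, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2/24 and 10.0.0.3/24, all bridge-side ends enslaved to a single nvmf_br bridge, plus an iptables accept rule for the NVMe/TCP port.

# Sketch only -- mirrors the nvmf_veth_init commands recorded in this log.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # verify both target addresses answer from the host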
19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.145 Cannot find device "nvmf_tgt_br2" 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:24.145 Cannot find device "nvmf_tgt_br" 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:24.145 Cannot find device "nvmf_tgt_br2" 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:24.145 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:24.403 19:46:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:24.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:18:24.403 00:18:24.403 --- 10.0.0.2 ping statistics --- 00:18:24.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.403 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:24.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:24.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:18:24.403 00:18:24.403 --- 10.0.0.3 ping statistics --- 00:18:24.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.403 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:24.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:24.403 00:18:24.403 --- 10.0.0.1 ping statistics --- 00:18:24.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.403 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88889 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88889 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88889 ']' 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.403 19:46:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.661 [2024-07-15 19:46:50.208081] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:18:24.661 [2024-07-15 19:46:50.208213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.661 [2024-07-15 19:46:50.348170] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.918 [2024-07-15 19:46:50.455528] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
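With the topology verified by the pings above, nvmfappstart launches the target inside the namespace (pid 88889 in this run) and waitforlisten blocks until its RPC socket answers. A minimal equivalent of that launch-and-wait step, assuming it is run from the SPDK repo root and that /var/tmp/spdk.sock is the default RPC socket (waitforlisten in autotest_common.sh does the polling more carefully):

# Launch nvmf_tgt in the test namespace with the flags recorded above:
# -i 0 (shm id), -e 0xFFFF (all trace groups), -m 0x2 (core mask).
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll the RPC socket until the target is ready to accept commands.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done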
00:18:24.918 [2024-07-15 19:46:50.455597] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.918 [2024-07-15 19:46:50.455607] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.918 [2024-07-15 19:46:50.455615] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.918 [2024-07-15 19:46:50.455622] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.918 [2024-07-15 19:46:50.455659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.483 [2024-07-15 19:46:51.201276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.483 [2024-07-15 19:46:51.209378] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.483 null0 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.483 null1 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88939 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88939 /tmp/host.sock 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88939 ']' 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:25.483 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.483 19:46:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.741 [2024-07-15 19:46:51.299018] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:18:25.741 [2024-07-15 19:46:51.299703] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88939 ] 00:18:25.741 [2024-07-15 19:46:51.439244] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.999 [2024-07-15 19:46:51.563420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.600 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.600 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:26.600 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:26.601 19:46:52 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:26.601 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.859 [2024-07-15 19:46:52.625772] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:26.859 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:18:27.117 19:46:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:27.681 [2024-07-15 19:46:53.254266] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:27.681 [2024-07-15 19:46:53.254309] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:27.681 [2024-07-15 19:46:53.254344] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:27.681 [2024-07-15 19:46:53.341412] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:27.681 [2024-07-15 19:46:53.398387] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:27.681 [2024-07-15 19:46:53.398430] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:28.256 19:46:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:28.256 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.514 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.515 [2024-07-15 19:46:54.227349] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:28.515 [2024-07-15 19:46:54.227884] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:28.515 [2024-07-15 19:46:54.227915] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.515 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:28.773 [2024-07-15 19:46:54.313948] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.773 [2024-07-15 19:46:54.377271] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:28.773 [2024-07-15 19:46:54.377301] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:28.773 [2024-07-15 19:46:54.377324] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:28.773 19:46:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:29.706 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.965 [2024-07-15 19:46:55.525252] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:29.965 [2024-07-15 19:46:55.525295] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:29.965 [2024-07-15 19:46:55.526012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.965 [2024-07-15 19:46:55.526055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.965 [2024-07-15 19:46:55.526069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.965 [2024-07-15 19:46:55.526079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.965 [2024-07-15 19:46:55.526090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.965 [2024-07-15 19:46:55.526100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.965 [2024-07-15 19:46:55.526110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.965 [2024-07-15 19:46:55.526120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.965 [2024-07-15 19:46:55.526129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9b20 is same with the state(5) to be set 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' 
']]' 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.965 [2024-07-15 19:46:55.535963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9b20 (9): Bad file descriptor 00:18:29.965 [2024-07-15 19:46:55.545981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:29.965 [2024-07-15 19:46:55.546111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.965 [2024-07-15 19:46:55.546134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb9b20 with addr=10.0.0.2, port=4420 00:18:29.965 [2024-07-15 19:46:55.546146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9b20 is same with the state(5) to be set 00:18:29.965 [2024-07-15 19:46:55.546177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9b20 (9): Bad file descriptor 00:18:29.965 [2024-07-15 19:46:55.546195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:29.965 [2024-07-15 19:46:55.546205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:29.965 [2024-07-15 19:46:55.546216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:29.965 [2024-07-15 19:46:55.546243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.965 [2024-07-15 19:46:55.556054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:29.965 [2024-07-15 19:46:55.556196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.965 [2024-07-15 19:46:55.556218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb9b20 with addr=10.0.0.2, port=4420 00:18:29.965 [2024-07-15 19:46:55.556229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9b20 is same with the state(5) to be set 00:18:29.965 [2024-07-15 19:46:55.556245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9b20 (9): Bad file descriptor 00:18:29.965 [2024-07-15 19:46:55.556270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:29.965 [2024-07-15 19:46:55.556280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:29.965 [2024-07-15 19:46:55.556289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:18:29.965 [2024-07-15 19:46:55.556303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:29.965 [2024-07-15 19:46:55.566125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:29.965 [2024-07-15 19:46:55.566219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.965 [2024-07-15 19:46:55.566239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb9b20 with addr=10.0.0.2, port=4420 00:18:29.965 [2024-07-15 19:46:55.566250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9b20 is same with the state(5) to be set 00:18:29.965 [2024-07-15 19:46:55.566266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9b20 (9): Bad file descriptor 00:18:29.965 [2024-07-15 19:46:55.566290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:29.965 [2024-07-15 19:46:55.566300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:29.965 [2024-07-15 19:46:55.566309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:29.965 [2024-07-15 19:46:55.566324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:29.965 [2024-07-15 19:46:55.576197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:29.965 [2024-07-15 19:46:55.576339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.965 [2024-07-15 19:46:55.576361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb9b20 with addr=10.0.0.2, port=4420 00:18:29.965 [2024-07-15 19:46:55.576372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9b20 is same with the state(5) to be set 00:18:29.965 [2024-07-15 19:46:55.576400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9b20 (9): Bad file descriptor 00:18:29.965 [2024-07-15 19:46:55.576416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:29.965 [2024-07-15 19:46:55.576425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:29.965 [2024-07-15 19:46:55.576435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:29.965 [2024-07-15 19:46:55.576450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
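The repeating "connect() failed, errno = 111" / "Resetting controller failed." cycle above (and continuing below) is the bdev_nvme reconnect path retrying nqn.2016-06.io.spdk:cnode0 after the test removed its 10.0.0.2:4420 listener; errno 111 is ECONNREFUSED. A minimal sketch of the two RPCs involved, using only commands and flags that appear in this log (the rpc.py path and the /tmp/host.sock host socket are taken from the surrounding trace, not newly introduced):

  # Target side: drop the 4420 listener, so the host's controller has nothing
  # to reconnect to and each connect() fails with ECONNREFUSED (111).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Host side: the controller is still listed while bdev_nvme keeps retrying.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers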
00:18:29.965 [2024-07-15 19:46:55.586297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:29.965 [2024-07-15 19:46:55.586388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.965 [2024-07-15 19:46:55.586408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb9b20 with addr=10.0.0.2, port=4420 00:18:29.965 [2024-07-15 19:46:55.586419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9b20 is same with the state(5) to be set 00:18:29.965 [2024-07-15 19:46:55.586435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9b20 (9): Bad file descriptor 00:18:29.965 [2024-07-15 19:46:55.586450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:29.965 [2024-07-15 19:46:55.586459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:29.965 [2024-07-15 19:46:55.586467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:29.965 [2024-07-15 19:46:55.586481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:29.965 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:29.966 [2024-07-15 19:46:55.596349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:29.966 [2024-07-15 19:46:55.596439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.966 [2024-07-15 19:46:55.596460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb9b20 with addr=10.0.0.2, port=4420 00:18:29.966 [2024-07-15 19:46:55.596471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9b20 is same with the state(5) to be set 00:18:29.966 [2024-07-15 19:46:55.596487] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9b20 (9): Bad file descriptor 00:18:29.966 [2024-07-15 19:46:55.596502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:29.966 [2024-07-15 19:46:55.596511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:29.966 [2024-07-15 19:46:55.596520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:29.966 [2024-07-15 19:46:55.596535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:29.966 [2024-07-15 19:46:55.606409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:29.966 [2024-07-15 19:46:55.606491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:29.966 [2024-07-15 19:46:55.606511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb9b20 with addr=10.0.0.2, port=4420 00:18:29.966 [2024-07-15 19:46:55.606522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9b20 is same with the state(5) to be set 00:18:29.966 [2024-07-15 19:46:55.606537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9b20 (9): Bad file descriptor 00:18:29.966 [2024-07-15 19:46:55.606552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:29.966 [2024-07-15 19:46:55.606561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:29.966 [2024-07-15 19:46:55.606570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:29.966 [2024-07-15 19:46:55.606584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
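The "-- common/autotest_common.sh@912" through "@916" lines woven through this trace come from the test suite's waitforcondition polling helper. A rough reconstruction of the pattern the trace shows is below; it is a sketch inferred from the traced lines, not a copy of the SPDK source, and the sleep between retries is an assumption (no delay is visible in the trace):

  # Sketch of the loop traced as autotest_common.sh@912-@916: evaluate a
  # condition string up to `max` times and return 0 as soon as it holds.
  waitforcondition() {
      local cond=$1        # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1          # assumed back-off between polls; not shown in the trace
      done
      return 1
  }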
00:18:29.966 [2024-07-15 19:46:55.611357] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:29.966 [2024-07-15 19:46:55.611391] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.966 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:30.225 
19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:30.225 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.226 19:46:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.639 [2024-07-15 19:46:56.982581] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:31.639 [2024-07-15 19:46:56.982622] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:31.639 [2024-07-15 19:46:56.982659] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:31.639 [2024-07-15 19:46:57.068702] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:31.639 [2024-07-15 19:46:57.129428] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:31.639 [2024-07-15 19:46:57.129551] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.639 2024/07/15 19:46:57 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:31.639 request: 00:18:31.639 { 00:18:31.639 "method": "bdev_nvme_start_discovery", 00:18:31.639 "params": { 00:18:31.639 "name": "nvme", 00:18:31.639 "trtype": "tcp", 00:18:31.639 "traddr": "10.0.0.2", 00:18:31.639 "adrfam": "ipv4", 00:18:31.639 "trsvcid": "8009", 00:18:31.639 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:31.639 "wait_for_attach": true 00:18:31.639 } 00:18:31.639 } 00:18:31.639 Got JSON-RPC error response 00:18:31.639 GoRPCClient: error on JSON-RPC call 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.639 2024/07/15 19:46:57 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:31.639 request: 00:18:31.639 { 00:18:31.639 "method": "bdev_nvme_start_discovery", 00:18:31.639 "params": { 00:18:31.639 "name": "nvme_second", 00:18:31.639 "trtype": "tcp", 00:18:31.639 "traddr": "10.0.0.2", 00:18:31.639 "adrfam": "ipv4", 00:18:31.639 "trsvcid": "8009", 00:18:31.639 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:31.639 "wait_for_attach": true 00:18:31.639 } 00:18:31.639 } 00:18:31.639 Got JSON-RPC error response 00:18:31.639 GoRPCClient: error on JSON-RPC call 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.639 19:46:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.013 [2024-07-15 19:46:58.410130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.013 [2024-07-15 19:46:58.410225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f380 with addr=10.0.0.2, port=8010 00:18:33.013 [2024-07-15 19:46:58.410262] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:33.013 [2024-07-15 19:46:58.410272] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:33.013 [2024-07-15 19:46:58.410283] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:33.948 [2024-07-15 19:46:59.410089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.948 [2024-07-15 19:46:59.410185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4f380 with addr=10.0.0.2, port=8010 00:18:33.948 [2024-07-15 19:46:59.410227] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:33.948 [2024-07-15 19:46:59.410236] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:33.948 [2024-07-15 19:46:59.410246] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:34.881 [2024-07-15 19:47:00.409957] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 
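The repeated "could not start discovery connect" attempts and the final "timed out while attaching discovery ctrlr" message above correspond to the -T 3000 (attach timeout) case: nothing listens on 10.0.0.2:8010, so the discovery connect is retried until the 3-second budget expires and the RPC fails with -110 (Connection timed out), as the JSON-RPC dump below shows. A minimal way to drive the same case by hand, using only the flags that appear in this log:

  # Start discovery against a port with no listener and give up after 3000 ms.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -T 3000
  # Expected to fail with the Code=-110 "Connection timed out" error captured below.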
00:18:34.881 2024/07/15 19:47:00 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:34.881 request: 00:18:34.881 { 00:18:34.881 "method": "bdev_nvme_start_discovery", 00:18:34.881 "params": { 00:18:34.881 "name": "nvme_second", 00:18:34.881 "trtype": "tcp", 00:18:34.881 "traddr": "10.0.0.2", 00:18:34.881 "adrfam": "ipv4", 00:18:34.881 "trsvcid": "8010", 00:18:34.881 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:34.881 "wait_for_attach": false, 00:18:34.881 "attach_timeout_ms": 3000 00:18:34.881 } 00:18:34.881 } 00:18:34.881 Got JSON-RPC error response 00:18:34.881 GoRPCClient: error on JSON-RPC call 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88939 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.881 rmmod nvme_tcp 00:18:34.881 rmmod nvme_fabrics 00:18:34.881 rmmod nvme_keyring 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:34.881 19:47:00 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88889 ']' 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88889 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88889 ']' 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88889 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88889 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:34.881 killing process with pid 88889 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88889' 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88889 00:18:34.881 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88889 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:35.139 00:18:35.139 real 0m11.154s 00:18:35.139 user 0m22.047s 00:18:35.139 sys 0m1.672s 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:35.139 ************************************ 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.139 END TEST nvmf_host_discovery 00:18:35.139 ************************************ 00:18:35.139 19:47:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:35.139 19:47:00 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:35.139 19:47:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:35.139 19:47:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.139 19:47:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:35.139 ************************************ 00:18:35.139 START TEST nvmf_host_multipath_status 00:18:35.139 ************************************ 00:18:35.139 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:35.398 * Looking for test 
storage... 00:18:35.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.398 19:47:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:35.398 Cannot find device "nvmf_tgt_br" 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:18:35.398 Cannot find device "nvmf_tgt_br2" 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:35.398 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:35.399 Cannot find device "nvmf_tgt_br" 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:35.399 Cannot find device "nvmf_tgt_br2" 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.399 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.658 19:47:01 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:35.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:18:35.658 00:18:35.658 --- 10.0.0.2 ping statistics --- 00:18:35.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.658 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:35.658 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.658 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:18:35.658 00:18:35.658 --- 10.0.0.3 ping statistics --- 00:18:35.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.658 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:35.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:35.658 00:18:35.658 --- 10.0.0.1 ping statistics --- 00:18:35.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.658 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89422 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89422 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89422 ']' 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.658 19:47:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:35.658 [2024-07-15 19:47:01.426533] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
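With connectivity verified, nvmfappstart launches the target inside the namespace, and the RPC calls that follow in the trace build the rest of the fixture: a TCP transport, a Malloc0 namespace behind nqn.2016-06.io.spdk:cnode1 with ANA reporting, listeners on ports 4420 and 4421, and a bdevperf instance that attaches both paths. A condensed sketch of those steps, using the commands exactly as they appear in the trace (the spdk/rpc shorthands are just for readability here, and the waitforlisten helper is approximated by a simple RPC poll):

  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py"

  # 1) NVMe-oF target on cores 0-1 inside the namespace; wait until it answers RPCs.
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # stand-in for waitforlisten

  # 2) Transport, backing bdev, subsystem (-r enables the ANA reporting the test drives), listeners.
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # 3) Host side: bdevperf on core 2 with its own RPC socket (the script also waits for
  #    /var/tmp/bdevperf.sock before issuing RPCs), one controller per listener, the second
  #    attached with -x multipath so both become I/O paths of Nvme0n1.
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -t 120 -s /var/tmp/bdevperf.sock perform_tests &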
00:18:35.658 [2024-07-15 19:47:01.426648] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.916 [2024-07-15 19:47:01.557733] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:35.916 [2024-07-15 19:47:01.664491] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.916 [2024-07-15 19:47:01.664548] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.916 [2024-07-15 19:47:01.664557] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.916 [2024-07-15 19:47:01.664564] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.916 [2024-07-15 19:47:01.664571] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.916 [2024-07-15 19:47:01.664723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.916 [2024-07-15 19:47:01.664921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.848 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.848 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:36.848 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.848 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:36.848 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:36.848 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.848 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89422 00:18:36.848 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:37.105 [2024-07-15 19:47:02.723019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.105 19:47:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:37.362 Malloc0 00:18:37.362 19:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:37.619 19:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.875 19:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.132 [2024-07-15 19:47:03.756862] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.132 19:47:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:18:38.389 [2024-07-15 19:47:03.976946] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89526 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89526 /var/tmp/bdevperf.sock 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89526 ']' 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.389 19:47:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:39.322 19:47:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.322 19:47:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:39.322 19:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:39.580 19:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:40.146 Nvme0n1 00:18:40.146 19:47:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:40.404 Nvme0n1 00:18:40.404 19:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:40.404 19:47:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:42.306 19:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:42.306 19:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:42.872 19:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:18:42.872 19:47:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:44.249 19:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:44.249 19:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:44.249 19:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.249 19:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:44.249 19:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.249 19:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:44.249 19:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.249 19:47:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:44.508 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:44.508 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:44.508 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:44.508 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.766 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.766 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:44.767 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.767 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:45.026 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:45.026 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:45.026 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:45.026 19:47:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:45.593 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:45.593 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:45.593 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:45.593 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:45.593 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:45.593 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:45.593 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:45.851 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:46.110 19:47:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:47.487 19:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:47.487 19:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:47.487 19:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.487 19:47:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:47.487 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:47.487 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:47.487 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.487 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:47.745 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.745 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:47.745 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.745 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:48.018 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.018 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:48.018 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.018 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:48.289 19:47:13 
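check_status, exercised for the first time above, is six of these port_status probes in a row: current, connected and accessible for each of the two listener ports, compared against the expected booleans passed in. Each probe is the initiator-side bdev_nvme_get_io_paths RPC piped through jq, with the same filter as in the trace; a standalone equivalent:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # True when the path through listener port 4420 is the one currently carrying I/O.
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
  # connected and accessible are read the same way, e.g. for the 4421 path:
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible'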
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.289 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:48.289 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.289 19:47:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:48.547 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.547 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:48.547 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.547 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:48.806 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.806 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:48.806 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:49.064 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:49.323 19:47:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:50.258 19:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:50.258 19:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:50.258 19:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.258 19:47:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:50.516 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.516 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:50.516 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.517 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:50.776 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:50.776 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:50.776 19:47:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.776 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:51.034 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.034 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:51.034 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:51.034 19:47:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.293 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.293 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:51.293 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.293 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:51.551 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.551 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:51.551 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.551 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:51.810 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.810 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:51.810 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:52.069 19:47:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:52.328 19:47:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:53.705 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:53.705 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:53.705 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.705 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:53.705 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.705 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:53.705 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.705 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:53.963 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:53.963 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:53.963 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:53.963 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.221 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.221 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:54.221 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.221 19:47:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:54.547 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.547 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:54.547 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:54.547 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.806 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.806 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:54.806 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.806 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:55.064 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:55.064 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:55.064 19:47:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:55.323 19:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:55.581 19:47:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:56.956 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:56.956 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:56.956 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.956 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:56.956 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:56.956 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:56.956 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.956 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:57.214 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:57.214 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:57.214 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.214 19:47:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:57.472 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.472 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:57.472 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.472 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:57.730 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.730 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:57.730 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.730 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:57.989 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:57.989 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:57.989 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.989 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:58.246 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:58.246 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:58.246 19:47:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:58.505 19:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:58.763 19:47:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:59.702 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:59.702 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:59.702 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.702 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:59.961 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:59.961 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:59.961 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.961 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:00.219 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.219 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:00.219 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.219 19:47:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:00.477 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.477 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:00.477 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.477 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:00.736 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.736 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:00.736 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.736 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:00.994 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:00.994 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:00.994 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:00.994 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.251 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.251 19:47:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:01.509 19:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:01.509 19:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:01.767 19:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:01.767 19:47:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:03.140 19:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:03.140 19:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:03.140 19:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.140 19:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:03.140 19:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.140 19:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:03.140 19:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.140 19:47:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:03.396 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.397 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:03.397 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.397 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:03.960 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.960 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:03.960 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.960 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:04.217 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.217 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:04.217 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.217 19:47:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:04.475 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.475 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:04.475 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.475 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:04.732 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.732 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:04.732 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:05.045 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:05.303 19:47:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:06.235 
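From @116 above the initiator runs with the active_active multipath policy (I/O spread over every optimized path rather than a single active one), and each remaining scenario is driven the same way: flip the ANA state advertised by each listener, sleep a second so the host can notice the change, then re-run check_status. The RPC pair behind set_ANA_state, as just issued for the non_optimized/optimized case:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # One-time, host side: distribute I/O across all optimized paths of Nvme0n1.
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  # Target side: advertise a new ANA state per listener (optimized, non_optimized or inaccessible).
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 1   # give the host time to pick up the ANA change before check_status runs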
19:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:06.235 19:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:06.235 19:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:06.235 19:47:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.493 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:06.493 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:06.493 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.493 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:06.751 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.751 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:06.751 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.009 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:07.267 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.267 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.267 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.267 19:47:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:07.526 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.526 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:07.526 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.526 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:07.784 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.784 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:07.784 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.784 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.042 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.042 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:08.042 19:47:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:08.300 19:47:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:08.867 19:47:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:09.808 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:09.808 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:09.808 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.808 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:10.065 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.065 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:10.065 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:10.065 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.322 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.322 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:10.322 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.322 19:47:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:10.579 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.579 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:10.579 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.579 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:10.837 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.837 19:47:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:10.837 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.837 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.095 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.095 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:11.095 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.095 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:11.352 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.352 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:11.352 19:47:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:11.609 19:47:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:11.866 19:47:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:12.799 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:12.799 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:12.799 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.799 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:13.057 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.057 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:13.057 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.057 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:13.330 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:13.330 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:13.330 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.330 19:47:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:13.588 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.588 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:13.588 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.588 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:13.845 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.845 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:13.846 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.846 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:14.103 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.103 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:14.103 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.103 19:47:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89526 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89526 ']' 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89526 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89526 00:19:14.360 killing process with pid 89526 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89526' 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89526 00:19:14.360 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89526 00:19:14.631 Connection closed with partial response: 00:19:14.631 
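killprocess stops bdevperf well before its 90-second -t budget (and the 120-second perform_tests timeout) is used up, which is why the "Connection closed with partial response" lines that follow are expected here rather than a failure. Stripped of its sudo special-casing, the helper as seen in the trace is simply (pid stands for the bdevperf_pid recorded at launch):

  pid=89526
  kill -0 "$pid"                          # confirm the process is still running
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                             # reap it and let it flush its output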
00:19:14.631 00:19:14.631 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89526 00:19:14.631 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:14.631 [2024-07-15 19:47:04.044258] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:19:14.631 [2024-07-15 19:47:04.044459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89526 ] 00:19:14.631 [2024-07-15 19:47:04.178765] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.631 [2024-07-15 19:47:04.288478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.631 Running I/O for 90 seconds... 00:19:14.631 [2024-07-15 19:47:21.022980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-07-15 19:47:21.023065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.023101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.023118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.023141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.023168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.023193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.023208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.023229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.023243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.023263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.023278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.023298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.023313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.023334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.023348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.631 
[2024-07-15 19:47:21.025752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.025953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.025995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.026010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.026031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-07-15 19:47:21.026045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.026067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-07-15 19:47:21.026081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.026102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-07-15 19:47:21.026117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:14.631 [2024-07-15 19:47:21.026138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-07-15 19:47:21.026153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.026187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-07-15 19:47:21.026203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.026224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-07-15 19:47:21.026238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.026259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-07-15 19:47:21.026274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.026295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-07-15 19:47:21.026309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.026330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-07-15 19:47:21.026344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-07-15 19:47:21.027028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.632 [2024-07-15 19:47:21.027541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.027974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.027989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-07-15 19:47:21.028370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.632 [2024-07-15 19:47:21.028390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:19:14.633 [2024-07-15 19:47:21.028660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.028983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.028997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.029033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.029069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-07-15 19:47:21.029107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.029148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.029202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.029237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.029272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.029307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.029328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.029343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.633 [2024-07-15 19:47:21.030514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-07-15 19:47:21.030701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.633 [2024-07-15 19:47:21.030722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.030743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.030764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.030779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.030799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.030814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.030834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.030849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.030870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72512 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.030884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.030905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.030919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.030940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.030955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.030975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.030990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031271] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 
19:47:21.031648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.031979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 
cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.031999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.032020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.032042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.032063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.032085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.032099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.032119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.032134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.032164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.032181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.032202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.032217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.032238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.032253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.032274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-07-15 19:47:21.032288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.634 [2024-07-15 19:47:21.032309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.635 [2024-07-15 19:47:21.032323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.635 [2024-07-15 19:47:21.032344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.635 [2024-07-15 19:47:21.032358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:19:14.635 [2024-07-15 19:47:21.032379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:14.635 [2024-07-15 19:47:21.032393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for every remaining queued command on qid:1 (READ and WRITE, nsid:1, lba 71656-72672, len:8, timestamps 19:47:21.032 through 19:47:21.059): nvme_io_qpair_print_command prints the outstanding command and spdk_nvme_print_completion reports it completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the NVMe ANA inaccessible path status ...]
00:19:14.640 [2024-07-15 19:47:21.059836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1
lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.059856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.059886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.059906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.059936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.059956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.059986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:19:14.640 [2024-07-15 19:47:21.060873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.060973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.060993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.061022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.061042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.061072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.061092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.061122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.061142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.061185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.061207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.061236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.061257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.061287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.061308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.640 [2024-07-15 19:47:21.061337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-07-15 19:47:21.061368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.061966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.061995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.062048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.062098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.062159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.062207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.062242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.062276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-07-15 19:47:21.062311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:14.641 [2024-07-15 19:47:21.062346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-07-15 19:47:21.062381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-07-15 19:47:21.062416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.062437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-07-15 19:47:21.062451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-07-15 19:47:21.063271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-07-15 19:47:21.063329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-07-15 19:47:21.063365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-07-15 19:47:21.063400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.641 [2024-07-15 19:47:21.063917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.641 [2024-07-15 19:47:21.063932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.063952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.063967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.063987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:19:14.642 [2024-07-15 19:47:21.064222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.064967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.064981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.642 [2024-07-15 19:47:21.065287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.642 [2024-07-15 19:47:21.065391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.642 [2024-07-15 19:47:21.065426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.642 [2024-07-15 19:47:21.065447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.065461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.065483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.065497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.066975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.066996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.067010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.643 [2024-07-15 19:47:21.067031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.643 [2024-07-15 19:47:21.067045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:19:14.643 [2024-07-15 19:47:21.067065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:14.643 [2024-07-15 19:47:21.067079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[... several hundred similar nvme_qpair.c NOTICE line pairs omitted (timestamps 19:47:21.067 through 19:47:21.085): WRITE and READ commands on sqid:1, nsid:1, lba 71656-72672, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status ...]
00:19:14.648 [2024-07-15 19:47:21.085000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:14.648 [2024-07-15 19:47:21.085015] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.648 [2024-07-15 19:47:21.085435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.648 [2024-07-15 19:47:21.085843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:14.648 [2024-07-15 19:47:21.085868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.085883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.085923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.085937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.085963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.085977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:21.086485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:21.086755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:21.086782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.388769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.388836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.388908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.388927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.388949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.388990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.389011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.389025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:19:14.649 [2024-07-15 19:47:37.389044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.389058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.389077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.389090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.389109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.389122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.389141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.389158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.389189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.389204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.389224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.389237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.389256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.389270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.389289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.389302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.392689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.392724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.392758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.392792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.392985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.649 [2024-07-15 19:47:37.392999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.394242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.394270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.394295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.649 [2024-07-15 19:47:37.394311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.649 [2024-07-15 19:47:37.394344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.650 [2024-07-15 19:47:37.394567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.650 [2024-07-15 19:47:37.394691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.650 [2024-07-15 19:47:37.394705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.650 Received shutdown signal, test time was about 33.960532 seconds 00:19:14.650 00:19:14.650 Latency(us) 00:19:14.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.650 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.650 Verification LBA range: start 0x0 length 0x4000 00:19:14.650 Nvme0n1 : 33.96 8733.99 34.12 0.00 0.00 14628.47 121.95 4087539.90 00:19:14.650 =================================================================================================================== 00:19:14.650 Total : 8733.99 34.12 0.00 0.00 14628.47 121.95 4087539.90 00:19:14.650 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:14.907 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:15.165 rmmod nvme_tcp 00:19:15.165 rmmod nvme_fabrics 00:19:15.165 rmmod nvme_keyring 00:19:15.165 19:47:40 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89422 ']' 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89422 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89422 ']' 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89422 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89422 00:19:15.165 killing process with pid 89422 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89422' 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89422 00:19:15.165 19:47:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89422 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:15.422 00:19:15.422 real 0m40.159s 00:19:15.422 user 2m11.386s 00:19:15.422 sys 0m9.885s 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:15.422 19:47:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:15.422 ************************************ 00:19:15.422 END TEST nvmf_host_multipath_status 00:19:15.422 ************************************ 00:19:15.422 19:47:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:15.422 19:47:41 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:15.422 19:47:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:15.423 19:47:41 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:15.423 19:47:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:15.423 ************************************ 00:19:15.423 START TEST nvmf_discovery_remove_ifc 00:19:15.423 ************************************ 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:15.423 * Looking for test storage... 00:19:15.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.423 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.681 19:47:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:15.681 Cannot find device "nvmf_tgt_br" 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.681 Cannot find device "nvmf_tgt_br2" 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:15.681 Cannot find device "nvmf_tgt_br" 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:15.681 Cannot find device "nvmf_tgt_br2" 00:19:15.681 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
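The nvmf_veth_init trace above first probes for leftovers from a previous run (the "Cannot find device" and "Cannot open network namespace" messages are expected on a clean host) and then builds the test network: a namespace nvmf_tgt_ns_spdk for the SPDK target, three veth pairs, and 10.0.0.0/24 addressing with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace. Below is a minimal, hedged sketch of just this part of the setup (assumes root and iproute2; names and addresses are taken from the trace, and the cleanup/retry logic of the real nvmf/common.sh is omitted); the bridging, firewall rule and connectivity check that follow in the trace are sketched a little further down.

# Hedged sketch of the namespace/veth layout built by nvmf_veth_init
# (simplified; run as root, iproute2 required).
ip netns add nvmf_tgt_ns_spdk                              # namespace hosting the SPDK target
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up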
00:19:15.682 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:15.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:15.940 00:19:15.940 --- 10.0.0.2 ping statistics --- 00:19:15.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.940 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:15.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:15.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:15.940 00:19:15.940 --- 10.0.0.3 ping statistics --- 00:19:15.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.940 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:15.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:19:15.940 00:19:15.940 --- 10.0.0.1 ping statistics --- 00:19:15.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.940 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90831 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90831 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90831 ']' 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.940 19:47:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:15.940 [2024-07-15 19:47:41.641488] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
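At this point the remaining links are brought up, the host-side peers are enslaved to the nvmf_br bridge, TCP port 4420 is opened in iptables, and one ping per address confirms that 10.0.0.2, 10.0.0.3 and 10.0.0.1 are reachable before nvmfappstart launches nvmf_tgt inside the namespace with core mask 0x2 and waits for its RPC socket. A hedged continuation of the sketch above, under the same assumptions (the backgrounding with & stands in for the script's own process management and waitforlisten polling):

# Bring up the remaining links, bridge the host-side peers, open the
# NVMe/TCP port and verify connectivity, then start the target inside the
# namespace (paths and arguments as seen in this job's trace).
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # target, first interface
ping -c 1 10.0.0.3                                   # target, second interface
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # initiator, as seen from the target side
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &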
00:19:15.940 [2024-07-15 19:47:41.641588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.198 [2024-07-15 19:47:41.775036] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.198 [2024-07-15 19:47:41.857132] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.198 [2024-07-15 19:47:41.857206] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.198 [2024-07-15 19:47:41.857218] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.198 [2024-07-15 19:47:41.857226] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.198 [2024-07-15 19:47:41.857234] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.198 [2024-07-15 19:47:41.857259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.130 [2024-07-15 19:47:42.715049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.130 [2024-07-15 19:47:42.723152] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:17.130 null0 00:19:17.130 [2024-07-15 19:47:42.755124] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90887 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90887 /tmp/host.sock 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90887 ']' 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.130 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.130 19:47:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.130 [2024-07-15 19:47:42.839556] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:19:17.130 [2024-07-15 19:47:42.839667] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90887 ] 00:19:17.387 [2024-07-15 19:47:42.980974] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.387 [2024-07-15 19:47:43.106031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.317 19:47:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:19.257 [2024-07-15 19:47:44.978142] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:19.257 [2024-07-15 19:47:44.978182] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:19.257 [2024-07-15 19:47:44.978200] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:19.529 [2024-07-15 19:47:45.065343] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:19.529 
[2024-07-15 19:47:45.122360] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:19.529 [2024-07-15 19:47:45.122441] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:19.529 [2024-07-15 19:47:45.122469] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:19.529 [2024-07-15 19:47:45.122485] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:19.529 [2024-07-15 19:47:45.122510] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:19.529 [2024-07-15 19:47:45.127176] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1dbc8c0 was disconnected and freed. delete nvme_qpair. 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.529 19:47:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:19.529 19:47:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:20.903 19:47:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:21.837 19:47:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:22.771 19:47:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:23.703 19:47:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:23.703 19:47:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:25.074 19:47:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:25.074 [2024-07-15 19:47:50.550321] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:25.074 [2024-07-15 19:47:50.550404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.074 [2024-07-15 19:47:50.550422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.074 [2024-07-15 19:47:50.550436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.074 [2024-07-15 19:47:50.550445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.074 [2024-07-15 19:47:50.550455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.074 [2024-07-15 19:47:50.550464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.074 [2024-07-15 19:47:50.550474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.074 [2024-07-15 19:47:50.550498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.074 [2024-07-15 19:47:50.550523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.074 [2024-07-15 19:47:50.550532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.074 [2024-07-15 19:47:50.550540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85aa0 is same with the state(5) to be set 00:19:25.074 [2024-07-15 19:47:50.560315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85aa0 (9): Bad file descriptor 00:19:25.074 [2024-07-15 19:47:50.570343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.006 [2024-07-15 19:47:51.630236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:26.006 [2024-07-15 19:47:51.630366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85aa0 with addr=10.0.0.2, port=4420 00:19:26.006 [2024-07-15 19:47:51.630399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85aa0 is same with the state(5) to be set 00:19:26.006 [2024-07-15 19:47:51.630462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85aa0 (9): Bad file descriptor 00:19:26.006 [2024-07-15 19:47:51.631312] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:26.006 [2024-07-15 19:47:51.631369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:26.006 [2024-07-15 19:47:51.631389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:26.006 [2024-07-15 19:47:51.631408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:26.006 [2024-07-15 19:47:51.631446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:26.006 [2024-07-15 19:47:51.631466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:26.006 19:47:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:26.959 [2024-07-15 19:47:52.631526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:26.959 [2024-07-15 19:47:52.631609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:26.959 [2024-07-15 19:47:52.631638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:26.959 [2024-07-15 19:47:52.631648] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:26.959 [2024-07-15 19:47:52.631673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:26.959 [2024-07-15 19:47:52.631704] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:26.959 [2024-07-15 19:47:52.631802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.959 [2024-07-15 19:47:52.631816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.959 [2024-07-15 19:47:52.631829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.959 [2024-07-15 19:47:52.631838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.959 [2024-07-15 19:47:52.631848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.959 [2024-07-15 19:47:52.631856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.959 [2024-07-15 19:47:52.631865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.959 [2024-07-15 19:47:52.631873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.959 [2024-07-15 19:47:52.631883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.959 [2024-07-15 19:47:52.631891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.959 [2024-07-15 19:47:52.631900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:19:26.959 [2024-07-15 19:47:52.631938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d28540 (9): Bad file descriptor 00:19:26.959 [2024-07-15 19:47:52.632930] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:26.959 [2024-07-15 19:47:52.632947] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:26.959 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:27.217 19:47:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.148 19:47:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:28.148 19:47:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:29.081 [2024-07-15 19:47:54.639839] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:29.081 [2024-07-15 19:47:54.639878] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:29.081 [2024-07-15 19:47:54.639895] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:29.081 [2024-07-15 19:47:54.725950] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:29.081 [2024-07-15 19:47:54.782126] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:29.081 [2024-07-15 19:47:54.782225] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:29.081 [2024-07-15 19:47:54.782267] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:29.081 [2024-07-15 19:47:54.782284] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:29.081 [2024-07-15 19:47:54.782293] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:29.081 [2024-07-15 19:47:54.788514] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d9a860 was disconnected and freed. delete nvme_qpair. 
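The sequence above is the core of the discovery_remove_ifc scenario: a discovery connection is opened with short reconnect/loss timeouts, the target-side interface is torn down until the attached bdev disappears, and the interface is then restored so discovery re-attaches a fresh controller. The condensed sketch below is illustrative only, assembled from the commands visible in this trace; it assumes scripts/rpc.py is the RPC client behind the rpc_cmd wrapper, and that a host app is already listening on /tmp/host.sock with a target serving discovery on 10.0.0.2:8009 inside the nvmf_tgt_ns_spdk namespace.

    HOST_SOCK=/tmp/host.sock
    RPC="scripts/rpc.py -s $HOST_SOCK"   # assumed RPC client behind rpc_cmd

    # Attach through discovery with the short timeouts used by the test.
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

    # Remove the target interface; nvme0n1 disappears once the loss timeout expires.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    while [[ -n "$($RPC bdev_get_bdevs | jq -r '.[].name')" ]]; do sleep 1; done

    # Restore the interface; discovery re-attaches and exposes a new bdev (nvme1n1).
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    while [[ "$($RPC bdev_get_bdevs | jq -r '.[].name')" != nvme1n1 ]]; do sleep 1; done

The final get_bdev_list check below confirms the new bdev name before the host and target processes are torn down.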
00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:29.339 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90887 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90887 ']' 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90887 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90887 00:19:29.340 killing process with pid 90887 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90887' 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90887 00:19:29.340 19:47:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90887 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.597 rmmod nvme_tcp 00:19:29.597 rmmod nvme_fabrics 00:19:29.597 rmmod nvme_keyring 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:29.597 19:47:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90831 ']' 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90831 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90831 ']' 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90831 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90831 00:19:29.597 killing process with pid 90831 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90831' 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90831 00:19:29.597 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90831 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:29.855 00:19:29.855 real 0m14.443s 00:19:29.855 user 0m25.964s 00:19:29.855 sys 0m1.638s 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:29.855 ************************************ 00:19:29.855 END TEST nvmf_discovery_remove_ifc 00:19:29.855 19:47:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.855 ************************************ 00:19:29.855 19:47:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:29.855 19:47:55 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:29.855 19:47:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:29.855 19:47:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.855 19:47:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:29.855 ************************************ 00:19:29.855 START TEST nvmf_identify_kernel_target 00:19:29.855 ************************************ 00:19:29.855 19:47:55 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:30.113 * Looking for test storage... 00:19:30.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:30.114 Cannot find device "nvmf_tgt_br" 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.114 Cannot find device "nvmf_tgt_br2" 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:30.114 Cannot find device "nvmf_tgt_br" 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:30.114 Cannot find device "nvmf_tgt_br2" 00:19:30.114 19:47:55 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.114 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:30.372 19:47:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:30.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:19:30.372 00:19:30.372 --- 10.0.0.2 ping statistics --- 00:19:30.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.372 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:30.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:30.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:30.372 00:19:30.372 --- 10.0.0.3 ping statistics --- 00:19:30.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.372 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:30.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:19:30.372 00:19:30.372 --- 10.0.0.1 ping statistics --- 00:19:30.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.372 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:30.372 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:30.373 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:30.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:30.938 Waiting for block devices as requested 00:19:30.938 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:30.938 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:30.938 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:31.196 No valid GPT data, bailing 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:31.196 No valid GPT data, bailing 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:31.196 No valid GPT data, bailing 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:31.196 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:31.196 No valid GPT data, bailing 00:19:31.455 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:31.455 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:31.455 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:31.455 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:31.455 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:31.455 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
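The loop traced above picks the namespace that will back the kernel target: anything zoned or already carrying a partition table is skipped, "No valid GPT data, bailing" marks a device as free, and the last free namespace (/dev/nvme1n1 here) wins. A condensed standalone sketch of that selection, using only blkid where the real helper also consults scripts/spdk-gpt.py:

# Sketch only: pick the last non-zoned NVMe namespace without a partition table.
nvme=""
for block in /sys/block/nvme*; do
  dev=${block##*/}
  # Zoned namespaces report e.g. "host-managed" instead of "none" in queue/zoned.
  [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
  # blkid prints a PTTYPE value only when a partition table exists on the device.
  [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
  nvme=/dev/$dev
done
echo "selected backing device: ${nvme:-none}"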
00:19:31.455 19:47:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -a 10.0.0.1 -t tcp -s 4420 00:19:31.455 00:19:31.455 Discovery Log Number of Records 2, Generation counter 2 00:19:31.455 =====Discovery Log Entry 0====== 00:19:31.455 trtype: tcp 00:19:31.455 adrfam: ipv4 00:19:31.455 subtype: current discovery subsystem 00:19:31.455 treq: not specified, sq flow control disable supported 00:19:31.455 portid: 1 00:19:31.455 trsvcid: 4420 00:19:31.455 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:31.455 traddr: 10.0.0.1 00:19:31.455 eflags: none 00:19:31.455 sectype: none 00:19:31.455 =====Discovery Log Entry 1====== 00:19:31.455 trtype: tcp 00:19:31.455 adrfam: ipv4 00:19:31.455 subtype: nvme subsystem 00:19:31.455 treq: not specified, sq flow control disable supported 00:19:31.455 portid: 1 00:19:31.455 trsvcid: 4420 00:19:31.455 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:31.455 traddr: 10.0.0.1 00:19:31.455 eflags: none 00:19:31.455 sectype: none 00:19:31.455 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:31.455 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:31.455 ===================================================== 00:19:31.455 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:31.455 ===================================================== 00:19:31.455 Controller Capabilities/Features 00:19:31.455 ================================ 00:19:31.455 Vendor ID: 0000 00:19:31.455 Subsystem Vendor ID: 0000 00:19:31.455 Serial Number: 9452526397332b217bae 00:19:31.455 Model Number: Linux 00:19:31.455 Firmware Version: 6.7.0-68 00:19:31.455 Recommended Arb Burst: 0 00:19:31.455 IEEE OUI Identifier: 00 00 00 00:19:31.455 Multi-path I/O 00:19:31.455 May have multiple subsystem ports: No 00:19:31.455 May have multiple controllers: No 00:19:31.455 Associated with SR-IOV VF: No 00:19:31.455 Max Data Transfer Size: Unlimited 00:19:31.455 Max Number of Namespaces: 0 
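Bash xtrace does not print redirections, so the echo commands above appear without their configfs targets. A sketch of the equivalent explicit writes, with the attribute paths assumed from the stock kernel nvmet configfs layout rather than read from this log:

# Sketch: export /dev/nvme1n1 as nqn.2016-06.io.spdk:testnqn over TCP on 10.0.0.1:4420.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string seen in the identify output below
echo 1 > "$subsys/attr_allow_any_host"                         # the test keeps no host allow-list
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                            # expose the subsystem on the port

After this, nvme discover against 10.0.0.1:4420 returns the two log entries shown above: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.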
00:19:31.455 Max Number of I/O Queues: 1024 00:19:31.455 NVMe Specification Version (VS): 1.3 00:19:31.455 NVMe Specification Version (Identify): 1.3 00:19:31.455 Maximum Queue Entries: 1024 00:19:31.455 Contiguous Queues Required: No 00:19:31.455 Arbitration Mechanisms Supported 00:19:31.455 Weighted Round Robin: Not Supported 00:19:31.455 Vendor Specific: Not Supported 00:19:31.455 Reset Timeout: 7500 ms 00:19:31.455 Doorbell Stride: 4 bytes 00:19:31.455 NVM Subsystem Reset: Not Supported 00:19:31.455 Command Sets Supported 00:19:31.455 NVM Command Set: Supported 00:19:31.455 Boot Partition: Not Supported 00:19:31.455 Memory Page Size Minimum: 4096 bytes 00:19:31.455 Memory Page Size Maximum: 4096 bytes 00:19:31.455 Persistent Memory Region: Not Supported 00:19:31.455 Optional Asynchronous Events Supported 00:19:31.455 Namespace Attribute Notices: Not Supported 00:19:31.455 Firmware Activation Notices: Not Supported 00:19:31.455 ANA Change Notices: Not Supported 00:19:31.455 PLE Aggregate Log Change Notices: Not Supported 00:19:31.455 LBA Status Info Alert Notices: Not Supported 00:19:31.455 EGE Aggregate Log Change Notices: Not Supported 00:19:31.455 Normal NVM Subsystem Shutdown event: Not Supported 00:19:31.455 Zone Descriptor Change Notices: Not Supported 00:19:31.455 Discovery Log Change Notices: Supported 00:19:31.455 Controller Attributes 00:19:31.455 128-bit Host Identifier: Not Supported 00:19:31.455 Non-Operational Permissive Mode: Not Supported 00:19:31.455 NVM Sets: Not Supported 00:19:31.455 Read Recovery Levels: Not Supported 00:19:31.455 Endurance Groups: Not Supported 00:19:31.455 Predictable Latency Mode: Not Supported 00:19:31.455 Traffic Based Keep ALive: Not Supported 00:19:31.456 Namespace Granularity: Not Supported 00:19:31.456 SQ Associations: Not Supported 00:19:31.456 UUID List: Not Supported 00:19:31.456 Multi-Domain Subsystem: Not Supported 00:19:31.456 Fixed Capacity Management: Not Supported 00:19:31.456 Variable Capacity Management: Not Supported 00:19:31.456 Delete Endurance Group: Not Supported 00:19:31.456 Delete NVM Set: Not Supported 00:19:31.456 Extended LBA Formats Supported: Not Supported 00:19:31.456 Flexible Data Placement Supported: Not Supported 00:19:31.456 00:19:31.456 Controller Memory Buffer Support 00:19:31.456 ================================ 00:19:31.456 Supported: No 00:19:31.456 00:19:31.456 Persistent Memory Region Support 00:19:31.456 ================================ 00:19:31.456 Supported: No 00:19:31.456 00:19:31.456 Admin Command Set Attributes 00:19:31.456 ============================ 00:19:31.456 Security Send/Receive: Not Supported 00:19:31.456 Format NVM: Not Supported 00:19:31.456 Firmware Activate/Download: Not Supported 00:19:31.456 Namespace Management: Not Supported 00:19:31.456 Device Self-Test: Not Supported 00:19:31.456 Directives: Not Supported 00:19:31.456 NVMe-MI: Not Supported 00:19:31.456 Virtualization Management: Not Supported 00:19:31.456 Doorbell Buffer Config: Not Supported 00:19:31.456 Get LBA Status Capability: Not Supported 00:19:31.456 Command & Feature Lockdown Capability: Not Supported 00:19:31.456 Abort Command Limit: 1 00:19:31.456 Async Event Request Limit: 1 00:19:31.456 Number of Firmware Slots: N/A 00:19:31.456 Firmware Slot 1 Read-Only: N/A 00:19:31.456 Firmware Activation Without Reset: N/A 00:19:31.456 Multiple Update Detection Support: N/A 00:19:31.456 Firmware Update Granularity: No Information Provided 00:19:31.456 Per-Namespace SMART Log: No 00:19:31.456 Asymmetric Namespace Access Log Page: 
Not Supported 00:19:31.456 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:31.456 Command Effects Log Page: Not Supported 00:19:31.456 Get Log Page Extended Data: Supported 00:19:31.456 Telemetry Log Pages: Not Supported 00:19:31.456 Persistent Event Log Pages: Not Supported 00:19:31.456 Supported Log Pages Log Page: May Support 00:19:31.456 Commands Supported & Effects Log Page: Not Supported 00:19:31.456 Feature Identifiers & Effects Log Page:May Support 00:19:31.456 NVMe-MI Commands & Effects Log Page: May Support 00:19:31.456 Data Area 4 for Telemetry Log: Not Supported 00:19:31.456 Error Log Page Entries Supported: 1 00:19:31.456 Keep Alive: Not Supported 00:19:31.456 00:19:31.456 NVM Command Set Attributes 00:19:31.456 ========================== 00:19:31.456 Submission Queue Entry Size 00:19:31.456 Max: 1 00:19:31.456 Min: 1 00:19:31.456 Completion Queue Entry Size 00:19:31.456 Max: 1 00:19:31.456 Min: 1 00:19:31.456 Number of Namespaces: 0 00:19:31.456 Compare Command: Not Supported 00:19:31.456 Write Uncorrectable Command: Not Supported 00:19:31.456 Dataset Management Command: Not Supported 00:19:31.456 Write Zeroes Command: Not Supported 00:19:31.456 Set Features Save Field: Not Supported 00:19:31.456 Reservations: Not Supported 00:19:31.456 Timestamp: Not Supported 00:19:31.456 Copy: Not Supported 00:19:31.456 Volatile Write Cache: Not Present 00:19:31.456 Atomic Write Unit (Normal): 1 00:19:31.456 Atomic Write Unit (PFail): 1 00:19:31.456 Atomic Compare & Write Unit: 1 00:19:31.456 Fused Compare & Write: Not Supported 00:19:31.456 Scatter-Gather List 00:19:31.456 SGL Command Set: Supported 00:19:31.456 SGL Keyed: Not Supported 00:19:31.456 SGL Bit Bucket Descriptor: Not Supported 00:19:31.456 SGL Metadata Pointer: Not Supported 00:19:31.456 Oversized SGL: Not Supported 00:19:31.456 SGL Metadata Address: Not Supported 00:19:31.456 SGL Offset: Supported 00:19:31.456 Transport SGL Data Block: Not Supported 00:19:31.456 Replay Protected Memory Block: Not Supported 00:19:31.456 00:19:31.456 Firmware Slot Information 00:19:31.456 ========================= 00:19:31.456 Active slot: 0 00:19:31.456 00:19:31.456 00:19:31.456 Error Log 00:19:31.456 ========= 00:19:31.456 00:19:31.456 Active Namespaces 00:19:31.456 ================= 00:19:31.456 Discovery Log Page 00:19:31.456 ================== 00:19:31.456 Generation Counter: 2 00:19:31.456 Number of Records: 2 00:19:31.456 Record Format: 0 00:19:31.456 00:19:31.456 Discovery Log Entry 0 00:19:31.456 ---------------------- 00:19:31.456 Transport Type: 3 (TCP) 00:19:31.456 Address Family: 1 (IPv4) 00:19:31.456 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:31.456 Entry Flags: 00:19:31.456 Duplicate Returned Information: 0 00:19:31.456 Explicit Persistent Connection Support for Discovery: 0 00:19:31.456 Transport Requirements: 00:19:31.456 Secure Channel: Not Specified 00:19:31.456 Port ID: 1 (0x0001) 00:19:31.456 Controller ID: 65535 (0xffff) 00:19:31.456 Admin Max SQ Size: 32 00:19:31.456 Transport Service Identifier: 4420 00:19:31.456 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:31.456 Transport Address: 10.0.0.1 00:19:31.456 Discovery Log Entry 1 00:19:31.456 ---------------------- 00:19:31.456 Transport Type: 3 (TCP) 00:19:31.456 Address Family: 1 (IPv4) 00:19:31.456 Subsystem Type: 2 (NVM Subsystem) 00:19:31.456 Entry Flags: 00:19:31.456 Duplicate Returned Information: 0 00:19:31.456 Explicit Persistent Connection Support for Discovery: 0 00:19:31.456 Transport Requirements: 00:19:31.456 
Secure Channel: Not Specified 00:19:31.456 Port ID: 1 (0x0001) 00:19:31.456 Controller ID: 65535 (0xffff) 00:19:31.456 Admin Max SQ Size: 32 00:19:31.456 Transport Service Identifier: 4420 00:19:31.456 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:31.456 Transport Address: 10.0.0.1 00:19:31.456 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:31.715 get_feature(0x01) failed 00:19:31.715 get_feature(0x02) failed 00:19:31.715 get_feature(0x04) failed 00:19:31.715 ===================================================== 00:19:31.715 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:31.715 ===================================================== 00:19:31.715 Controller Capabilities/Features 00:19:31.715 ================================ 00:19:31.715 Vendor ID: 0000 00:19:31.715 Subsystem Vendor ID: 0000 00:19:31.715 Serial Number: dad137c21e9342b0d390 00:19:31.715 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:31.715 Firmware Version: 6.7.0-68 00:19:31.715 Recommended Arb Burst: 6 00:19:31.715 IEEE OUI Identifier: 00 00 00 00:19:31.715 Multi-path I/O 00:19:31.715 May have multiple subsystem ports: Yes 00:19:31.715 May have multiple controllers: Yes 00:19:31.715 Associated with SR-IOV VF: No 00:19:31.715 Max Data Transfer Size: Unlimited 00:19:31.715 Max Number of Namespaces: 1024 00:19:31.715 Max Number of I/O Queues: 128 00:19:31.715 NVMe Specification Version (VS): 1.3 00:19:31.715 NVMe Specification Version (Identify): 1.3 00:19:31.715 Maximum Queue Entries: 1024 00:19:31.715 Contiguous Queues Required: No 00:19:31.715 Arbitration Mechanisms Supported 00:19:31.715 Weighted Round Robin: Not Supported 00:19:31.715 Vendor Specific: Not Supported 00:19:31.715 Reset Timeout: 7500 ms 00:19:31.715 Doorbell Stride: 4 bytes 00:19:31.715 NVM Subsystem Reset: Not Supported 00:19:31.715 Command Sets Supported 00:19:31.715 NVM Command Set: Supported 00:19:31.715 Boot Partition: Not Supported 00:19:31.715 Memory Page Size Minimum: 4096 bytes 00:19:31.715 Memory Page Size Maximum: 4096 bytes 00:19:31.715 Persistent Memory Region: Not Supported 00:19:31.715 Optional Asynchronous Events Supported 00:19:31.715 Namespace Attribute Notices: Supported 00:19:31.715 Firmware Activation Notices: Not Supported 00:19:31.715 ANA Change Notices: Supported 00:19:31.715 PLE Aggregate Log Change Notices: Not Supported 00:19:31.715 LBA Status Info Alert Notices: Not Supported 00:19:31.715 EGE Aggregate Log Change Notices: Not Supported 00:19:31.715 Normal NVM Subsystem Shutdown event: Not Supported 00:19:31.715 Zone Descriptor Change Notices: Not Supported 00:19:31.715 Discovery Log Change Notices: Not Supported 00:19:31.715 Controller Attributes 00:19:31.715 128-bit Host Identifier: Supported 00:19:31.715 Non-Operational Permissive Mode: Not Supported 00:19:31.715 NVM Sets: Not Supported 00:19:31.715 Read Recovery Levels: Not Supported 00:19:31.715 Endurance Groups: Not Supported 00:19:31.715 Predictable Latency Mode: Not Supported 00:19:31.715 Traffic Based Keep ALive: Supported 00:19:31.715 Namespace Granularity: Not Supported 00:19:31.715 SQ Associations: Not Supported 00:19:31.715 UUID List: Not Supported 00:19:31.715 Multi-Domain Subsystem: Not Supported 00:19:31.715 Fixed Capacity Management: Not Supported 00:19:31.715 Variable Capacity Management: Not Supported 00:19:31.715 
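Outside the test harness, the data subsystem advertised in the second discovery log entry could also be attached with plain nvme-cli; this is not part of the test flow, only an illustrative sketch:

# Sketch: attach to the kernel target exported above, then detach again.
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme list                                   # the remote namespace appears as a local /dev/nvmeXnY
nvme disconnect -n nqn.2016-06.io.spdk:testnqn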
Delete Endurance Group: Not Supported 00:19:31.715 Delete NVM Set: Not Supported 00:19:31.715 Extended LBA Formats Supported: Not Supported 00:19:31.715 Flexible Data Placement Supported: Not Supported 00:19:31.715 00:19:31.715 Controller Memory Buffer Support 00:19:31.715 ================================ 00:19:31.715 Supported: No 00:19:31.715 00:19:31.715 Persistent Memory Region Support 00:19:31.715 ================================ 00:19:31.715 Supported: No 00:19:31.715 00:19:31.715 Admin Command Set Attributes 00:19:31.715 ============================ 00:19:31.715 Security Send/Receive: Not Supported 00:19:31.715 Format NVM: Not Supported 00:19:31.715 Firmware Activate/Download: Not Supported 00:19:31.715 Namespace Management: Not Supported 00:19:31.715 Device Self-Test: Not Supported 00:19:31.715 Directives: Not Supported 00:19:31.715 NVMe-MI: Not Supported 00:19:31.715 Virtualization Management: Not Supported 00:19:31.715 Doorbell Buffer Config: Not Supported 00:19:31.715 Get LBA Status Capability: Not Supported 00:19:31.715 Command & Feature Lockdown Capability: Not Supported 00:19:31.715 Abort Command Limit: 4 00:19:31.715 Async Event Request Limit: 4 00:19:31.715 Number of Firmware Slots: N/A 00:19:31.715 Firmware Slot 1 Read-Only: N/A 00:19:31.715 Firmware Activation Without Reset: N/A 00:19:31.715 Multiple Update Detection Support: N/A 00:19:31.715 Firmware Update Granularity: No Information Provided 00:19:31.715 Per-Namespace SMART Log: Yes 00:19:31.715 Asymmetric Namespace Access Log Page: Supported 00:19:31.715 ANA Transition Time : 10 sec 00:19:31.715 00:19:31.715 Asymmetric Namespace Access Capabilities 00:19:31.715 ANA Optimized State : Supported 00:19:31.715 ANA Non-Optimized State : Supported 00:19:31.715 ANA Inaccessible State : Supported 00:19:31.715 ANA Persistent Loss State : Supported 00:19:31.715 ANA Change State : Supported 00:19:31.715 ANAGRPID is not changed : No 00:19:31.715 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:31.715 00:19:31.715 ANA Group Identifier Maximum : 128 00:19:31.715 Number of ANA Group Identifiers : 128 00:19:31.715 Max Number of Allowed Namespaces : 1024 00:19:31.715 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:31.715 Command Effects Log Page: Supported 00:19:31.715 Get Log Page Extended Data: Supported 00:19:31.715 Telemetry Log Pages: Not Supported 00:19:31.715 Persistent Event Log Pages: Not Supported 00:19:31.715 Supported Log Pages Log Page: May Support 00:19:31.715 Commands Supported & Effects Log Page: Not Supported 00:19:31.715 Feature Identifiers & Effects Log Page:May Support 00:19:31.715 NVMe-MI Commands & Effects Log Page: May Support 00:19:31.715 Data Area 4 for Telemetry Log: Not Supported 00:19:31.715 Error Log Page Entries Supported: 128 00:19:31.715 Keep Alive: Supported 00:19:31.715 Keep Alive Granularity: 1000 ms 00:19:31.715 00:19:31.715 NVM Command Set Attributes 00:19:31.715 ========================== 00:19:31.715 Submission Queue Entry Size 00:19:31.715 Max: 64 00:19:31.715 Min: 64 00:19:31.715 Completion Queue Entry Size 00:19:31.715 Max: 16 00:19:31.715 Min: 16 00:19:31.716 Number of Namespaces: 1024 00:19:31.716 Compare Command: Not Supported 00:19:31.716 Write Uncorrectable Command: Not Supported 00:19:31.716 Dataset Management Command: Supported 00:19:31.716 Write Zeroes Command: Supported 00:19:31.716 Set Features Save Field: Not Supported 00:19:31.716 Reservations: Not Supported 00:19:31.716 Timestamp: Not Supported 00:19:31.716 Copy: Not Supported 00:19:31.716 Volatile Write Cache: Present 
00:19:31.716 Atomic Write Unit (Normal): 1 00:19:31.716 Atomic Write Unit (PFail): 1 00:19:31.716 Atomic Compare & Write Unit: 1 00:19:31.716 Fused Compare & Write: Not Supported 00:19:31.716 Scatter-Gather List 00:19:31.716 SGL Command Set: Supported 00:19:31.716 SGL Keyed: Not Supported 00:19:31.716 SGL Bit Bucket Descriptor: Not Supported 00:19:31.716 SGL Metadata Pointer: Not Supported 00:19:31.716 Oversized SGL: Not Supported 00:19:31.716 SGL Metadata Address: Not Supported 00:19:31.716 SGL Offset: Supported 00:19:31.716 Transport SGL Data Block: Not Supported 00:19:31.716 Replay Protected Memory Block: Not Supported 00:19:31.716 00:19:31.716 Firmware Slot Information 00:19:31.716 ========================= 00:19:31.716 Active slot: 0 00:19:31.716 00:19:31.716 Asymmetric Namespace Access 00:19:31.716 =========================== 00:19:31.716 Change Count : 0 00:19:31.716 Number of ANA Group Descriptors : 1 00:19:31.716 ANA Group Descriptor : 0 00:19:31.716 ANA Group ID : 1 00:19:31.716 Number of NSID Values : 1 00:19:31.716 Change Count : 0 00:19:31.716 ANA State : 1 00:19:31.716 Namespace Identifier : 1 00:19:31.716 00:19:31.716 Commands Supported and Effects 00:19:31.716 ============================== 00:19:31.716 Admin Commands 00:19:31.716 -------------- 00:19:31.716 Get Log Page (02h): Supported 00:19:31.716 Identify (06h): Supported 00:19:31.716 Abort (08h): Supported 00:19:31.716 Set Features (09h): Supported 00:19:31.716 Get Features (0Ah): Supported 00:19:31.716 Asynchronous Event Request (0Ch): Supported 00:19:31.716 Keep Alive (18h): Supported 00:19:31.716 I/O Commands 00:19:31.716 ------------ 00:19:31.716 Flush (00h): Supported 00:19:31.716 Write (01h): Supported LBA-Change 00:19:31.716 Read (02h): Supported 00:19:31.716 Write Zeroes (08h): Supported LBA-Change 00:19:31.716 Dataset Management (09h): Supported 00:19:31.716 00:19:31.716 Error Log 00:19:31.716 ========= 00:19:31.716 Entry: 0 00:19:31.716 Error Count: 0x3 00:19:31.716 Submission Queue Id: 0x0 00:19:31.716 Command Id: 0x5 00:19:31.716 Phase Bit: 0 00:19:31.716 Status Code: 0x2 00:19:31.716 Status Code Type: 0x0 00:19:31.716 Do Not Retry: 1 00:19:31.716 Error Location: 0x28 00:19:31.716 LBA: 0x0 00:19:31.716 Namespace: 0x0 00:19:31.716 Vendor Log Page: 0x0 00:19:31.716 ----------- 00:19:31.716 Entry: 1 00:19:31.716 Error Count: 0x2 00:19:31.716 Submission Queue Id: 0x0 00:19:31.716 Command Id: 0x5 00:19:31.716 Phase Bit: 0 00:19:31.716 Status Code: 0x2 00:19:31.716 Status Code Type: 0x0 00:19:31.716 Do Not Retry: 1 00:19:31.716 Error Location: 0x28 00:19:31.716 LBA: 0x0 00:19:31.716 Namespace: 0x0 00:19:31.716 Vendor Log Page: 0x0 00:19:31.716 ----------- 00:19:31.716 Entry: 2 00:19:31.716 Error Count: 0x1 00:19:31.716 Submission Queue Id: 0x0 00:19:31.716 Command Id: 0x4 00:19:31.716 Phase Bit: 0 00:19:31.716 Status Code: 0x2 00:19:31.716 Status Code Type: 0x0 00:19:31.716 Do Not Retry: 1 00:19:31.716 Error Location: 0x28 00:19:31.716 LBA: 0x0 00:19:31.716 Namespace: 0x0 00:19:31.716 Vendor Log Page: 0x0 00:19:31.716 00:19:31.716 Number of Queues 00:19:31.716 ================ 00:19:31.716 Number of I/O Submission Queues: 128 00:19:31.716 Number of I/O Completion Queues: 128 00:19:31.716 00:19:31.716 ZNS Specific Controller Data 00:19:31.716 ============================ 00:19:31.716 Zone Append Size Limit: 0 00:19:31.716 00:19:31.716 00:19:31.716 Active Namespaces 00:19:31.716 ================= 00:19:31.716 get_feature(0x05) failed 00:19:31.716 Namespace ID:1 00:19:31.716 Command Set Identifier: NVM (00h) 
00:19:31.716 Deallocate: Supported 00:19:31.716 Deallocated/Unwritten Error: Not Supported 00:19:31.716 Deallocated Read Value: Unknown 00:19:31.716 Deallocate in Write Zeroes: Not Supported 00:19:31.716 Deallocated Guard Field: 0xFFFF 00:19:31.716 Flush: Supported 00:19:31.716 Reservation: Not Supported 00:19:31.716 Namespace Sharing Capabilities: Multiple Controllers 00:19:31.716 Size (in LBAs): 1310720 (5GiB) 00:19:31.716 Capacity (in LBAs): 1310720 (5GiB) 00:19:31.716 Utilization (in LBAs): 1310720 (5GiB) 00:19:31.716 UUID: 9254dd34-ccd1-4fbd-8b3d-d49e4dc335b8 00:19:31.716 Thin Provisioning: Not Supported 00:19:31.716 Per-NS Atomic Units: Yes 00:19:31.716 Atomic Boundary Size (Normal): 0 00:19:31.716 Atomic Boundary Size (PFail): 0 00:19:31.716 Atomic Boundary Offset: 0 00:19:31.716 NGUID/EUI64 Never Reused: No 00:19:31.716 ANA group ID: 1 00:19:31.716 Namespace Write Protected: No 00:19:31.716 Number of LBA Formats: 1 00:19:31.716 Current LBA Format: LBA Format #00 00:19:31.716 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:31.716 00:19:31.716 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:31.716 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:31.716 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:31.716 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.716 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:31.716 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:31.716 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.716 rmmod nvme_tcp 00:19:31.716 rmmod nvme_fabrics 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:31.974 
19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:31.974 19:47:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:32.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:32.796 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:32.796 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:32.796 00:19:32.796 real 0m2.871s 00:19:32.796 user 0m0.930s 00:19:32.796 sys 0m1.399s 00:19:32.796 19:47:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:32.796 19:47:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.796 ************************************ 00:19:32.796 END TEST nvmf_identify_kernel_target 00:19:32.796 ************************************ 00:19:32.796 19:47:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:32.796 19:47:58 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:32.796 19:47:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:32.796 19:47:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.796 19:47:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.796 ************************************ 00:19:32.796 START TEST nvmf_auth_host 00:19:32.796 ************************************ 00:19:32.796 19:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:33.054 * Looking for test storage... 
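The clean_kernel_target teardown traced above has to run strictly in reverse order of the setup: disable the namespace, drop the port-to-subsystem symlink, remove the namespace and port directories, and only then remove the subsystem directory and unload the nvmet modules, because configfs will not delete directories that are still referenced. As a standalone sketch, with the same assumed paths as the setup sketch:

# Sketch: tear down the kernel target created for the identify test.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$subsys/namespaces/1/enable"                         # quiesce the namespace first
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"           # unlink the subsystem from the port
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet                                    # possible only once configfs is empty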
00:19:33.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.054 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:33.055 Cannot find device "nvmf_tgt_br" 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.055 Cannot find device "nvmf_tgt_br2" 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:33.055 Cannot find device "nvmf_tgt_br" 
00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:33.055 Cannot find device "nvmf_tgt_br2" 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.055 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:33.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:19:33.313 00:19:33.313 --- 10.0.0.2 ping statistics --- 00:19:33.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.313 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:33.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:19:33.313 00:19:33.313 --- 10.0.0.3 ping statistics --- 00:19:33.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.313 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:33.313 00:19:33.313 --- 10.0.0.1 ping statistics --- 00:19:33.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.313 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.313 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:33.314 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:33.314 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.314 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:33.314 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:33.314 19:47:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:33.314 19:47:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.314 19:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.314 19:47:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.314 19:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91773 00:19:33.314 19:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91773 00:19:33.314 19:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91773 ']' 00:19:33.314 19:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.314 19:47:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:33.314 19:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.314 19:47:59 
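nvmf_veth_init above builds the test topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the peer ends are enslaved to the nvmf_br bridge, and iptables is opened for TCP/4420; the three pings then confirm reachability in both directions before the target starts. A condensed sketch of the same wiring (second target interface omitted, and assuming no leftovers from a previous run):

# Sketch: initiator veth in the root namespace, target veth in a namespace, joined by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and back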
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.314 19:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.314 19:47:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=852ceaa8408450684464a96428009764 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1zG 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 852ceaa8408450684464a96428009764 0 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 852ceaa8408450684464a96428009764 0 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=852ceaa8408450684464a96428009764 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1zG 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1zG 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.1zG 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc0ab416c8b41cfbb220ab1f545e9d2e1921f098b199c77db7aebb449af9d7d4 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Vx9 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc0ab416c8b41cfbb220ab1f545e9d2e1921f098b199c77db7aebb449af9d7d4 3 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc0ab416c8b41cfbb220ab1f545e9d2e1921f098b199c77db7aebb449af9d7d4 3 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc0ab416c8b41cfbb220ab1f545e9d2e1921f098b199c77db7aebb449af9d7d4 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Vx9 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Vx9 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Vx9 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5bcbec70d79f89f38820487b7ceea994cf1de31114d31b18 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.IIc 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5bcbec70d79f89f38820487b7ceea994cf1de31114d31b18 0 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5bcbec70d79f89f38820487b7ceea994cf1de31114d31b18 0 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5bcbec70d79f89f38820487b7ceea994cf1de31114d31b18 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.IIc 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.IIc 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.IIc 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de6b2ca93e50f19cac80159bc6643bbde43309fcdbbb890e 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.x87 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de6b2ca93e50f19cac80159bc6643bbde43309fcdbbb890e 2 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de6b2ca93e50f19cac80159bc6643bbde43309fcdbbb890e 2 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de6b2ca93e50f19cac80159bc6643bbde43309fcdbbb890e 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.x87 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.x87 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.x87 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af4c837aff7e5f056f9d0d2929987f24 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0EH 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af4c837aff7e5f056f9d0d2929987f24 
1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af4c837aff7e5f056f9d0d2929987f24 1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af4c837aff7e5f056f9d0d2929987f24 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0EH 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0EH 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0EH 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=82ee00895bb1492d623d65daab1ca226 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rJC 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 82ee00895bb1492d623d65daab1ca226 1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 82ee00895bb1492d623d65daab1ca226 1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=82ee00895bb1492d623d65daab1ca226 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:34.724 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rJC 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rJC 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rJC 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:34.983 19:48:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f983cbe3eeb88538dace0dbe21bc47cd6d66e7b7774833a 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qTj 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f983cbe3eeb88538dace0dbe21bc47cd6d66e7b7774833a 2 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f983cbe3eeb88538dace0dbe21bc47cd6d66e7b7774833a 2 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f983cbe3eeb88538dace0dbe21bc47cd6d66e7b7774833a 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qTj 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qTj 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.qTj 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d1c6ab8e34b893dfac5bc961c175404e 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gT2 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d1c6ab8e34b893dfac5bc961c175404e 0 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d1c6ab8e34b893dfac5bc961c175404e 0 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d1c6ab8e34b893dfac5bc961c175404e 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gT2 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gT2 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.gT2 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=778f9ebba97c5c3961af7b5c11933c807381d6ea80c1081d923336af243813c6 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.e3Z 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 778f9ebba97c5c3961af7b5c11933c807381d6ea80c1081d923336af243813c6 3 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 778f9ebba97c5c3961af7b5c11933c807381d6ea80c1081d923336af243813c6 3 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=778f9ebba97c5c3961af7b5c11933c807381d6ea80c1081d923336af243813c6 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.e3Z 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.e3Z 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.e3Z 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91773 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91773 ']' 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
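The gen_dhchap_key/format_key trace above reduces to: read the requested number of hex characters from /dev/urandom with xxd, wrap them in a DH-HMAC-CHAP secret string ("DHHC-1:<hash indicator>:<base64>:"), write the result to a mktemp file and chmod it to 0600. The body of the "python -" step is not visible in the xtrace, so the sketch below fills it in under the assumption that the secret bytes are base64-encoded together with an appended CRC-32 (little-endian), as in the standard DHHC-1 representation; the helper names are illustrative, not the ones in nvmf/common.sh.

# Minimal sketch of the key generation traced above. Assumes the standard
# DHHC-1 secret layout (base64 of secret bytes + CRC-32); helper names are
# hypothetical, not the ones used by nvmf/common.sh.
format_dhchap_secret() {                     # format_dhchap_secret <hex-secret> <hash-id>
    python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")   # CRC-32 appended, little-endian (assumed)
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]),
                           base64.b64encode(secret + crc).decode()), end="")
' "$1" "$2"
}

gen_dhchap_key_sketch() {                    # gen_dhchap_key_sketch <digest> <len>
    local digest=$1 len=$2
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of randomness, as in the trace
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_secret "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# e.g. the "gen_dhchap_key sha384 48" call above becomes:
ckeys[1]=$(gen_dhchap_key_sketch sha384 48)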
00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.983 19:48:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.241 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.241 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:35.241 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:35.241 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1zG 00:19:35.241 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.241 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Vx9 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vx9 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.IIc 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.x87 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.x87 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0EH 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rJC ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rJC 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
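Once the target app (pid 91773) is listening on /var/tmp/spdk.sock, each generated secret file is handed to the SPDK keyring over JSON-RPC. The rpc_cmd wrapper used in the trace is equivalent to calling scripts/rpc.py directly (it talks to /var/tmp/spdk.sock by default); the sketch below registers the same key/ckey pairs, with the paths copied from the gen_dhchap_key output earlier in this trace.

# Same registration as the rpc_cmd keyring_file_add_key calls in the trace,
# issued with scripts/rpc.py (default socket: /var/tmp/spdk.sock).
declare -a keys ckeys
keys[0]=/tmp/spdk.key-null.1zG;   ckeys[0]=/tmp/spdk.key-sha512.Vx9
keys[1]=/tmp/spdk.key-null.IIc;   ckeys[1]=/tmp/spdk.key-sha384.x87
keys[2]=/tmp/spdk.key-sha256.0EH; ckeys[2]=/tmp/spdk.key-sha256.rJC
keys[3]=/tmp/spdk.key-sha384.qTj; ckeys[3]=/tmp/spdk.key-null.gT2
keys[4]=/tmp/spdk.key-sha512.e3Z; ckeys[4]=

for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"          # host DHCHAP key
    [[ -n ${ckeys[$i]} ]] &&
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"    # controller (bidirectional) key
done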
00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qTj 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gT2 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gT2 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.e3Z 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
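The nvmet_auth_init / configure_kernel_target steps that follow build a Linux kernel NVMe/TCP soft target through configfs and back it with the first block device that passes the GPT/zoned checks. The xtrace shows the mkdir/echo/ln -s sequence but not where each echo is redirected, so the sketch below reconstructs the attribute paths from the usual /sys/kernel/config/nvmet layout; the attr_model and dhchap_* targets in particular are assumptions, not read from the log.

# Reconstruction of the configure_kernel_target / nvmet_auth_* steps traced
# below. Redirect targets are not visible in the xtrace; attribute names are
# assumed from the standard kernel nvmet configfs layout.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
ns=$subsys/namespaces/1
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir -p "$subsys" "$ns" "$port"

echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"   # assumed target of "echo SPDK-..."
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$ns/device_path"    # the usable, non-zoned device found by the scan below
echo 1            > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"      # expose the subsystem on the port

# Verify with the discovery log (the trace also passes --hostnqn/--hostid):
nvme discover -t tcp -a 10.0.0.1 -s 4420

# host/auth.sh then locks the subsystem to one host NQN and programs its
# DH-HMAC-CHAP parameters (attribute names assumed):
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)'  > "$host/dhchap_hash"
echo ffdhe2048       > "$host/dhchap_dhgroup"
echo "DHHC-1:00:..." > "$host/dhchap_key"        # key value from the trace, elided here
echo "DHHC-1:02:..." > "$host/dhchap_ctrl_key"   # ckey value from the trace, elided here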
00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:35.499 19:48:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:35.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:35.757 Waiting for block devices as requested 00:19:35.757 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.015 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:36.579 No valid GPT data, bailing 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:36.579 No valid GPT data, bailing 00:19:36.579 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:36.836 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:36.837 No valid GPT data, bailing 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:36.837 No valid GPT data, bailing 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:19:36.837 19:48:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -a 10.0.0.1 -t tcp -s 4420 00:19:36.837 00:19:36.837 Discovery Log Number of Records 2, Generation counter 2 00:19:36.837 =====Discovery Log Entry 0====== 00:19:36.837 trtype: tcp 00:19:36.837 adrfam: ipv4 00:19:36.837 subtype: current discovery subsystem 00:19:36.837 treq: not specified, sq flow control disable supported 00:19:36.837 portid: 1 00:19:36.837 trsvcid: 4420 00:19:36.837 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:36.837 traddr: 10.0.0.1 00:19:36.837 eflags: none 00:19:36.837 sectype: none 00:19:36.837 =====Discovery Log Entry 1====== 00:19:36.837 trtype: tcp 00:19:36.837 adrfam: ipv4 00:19:36.837 subtype: nvme subsystem 00:19:36.837 treq: not specified, sq flow control disable supported 00:19:36.837 portid: 1 00:19:36.837 trsvcid: 4420 00:19:36.837 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:36.837 traddr: 10.0.0.1 00:19:36.837 eflags: none 00:19:36.837 sectype: none 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.837 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.095 nvme0n1 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.095 19:48:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.353 19:48:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.353 nvme0n1 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.353 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.614 nvme0n1 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.614 19:48:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.614 nvme0n1 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.614 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:37.872 19:48:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.872 nvme0n1 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.872 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:37.873 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.130 nvme0n1 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.130 19:48:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.387 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.388 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.645 nvme0n1 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.645 nvme0n1 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.645 19:48:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.645 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.902 nvme0n1 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.902 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.160 nvme0n1 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:39.160 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
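[Editorial annotation, not part of the captured console output] The xtrace records in this section repeat one pattern per (digest, dhgroup, keyid) combination, driven by the loops at host/auth.sh@101-103: nvmet_auth_set_key programs the hash, DH group, and DHHC-1 secret on the target side (the echo calls at host/auth.sh@48-50; the configuration files they write to are not visible in this excerpt), then connect_authenticate (host/auth.sh@104) restricts the SPDK host to the same digest and DH group via bdev_nvme_set_options, attaches with bdev_nvme_attach_controller using --dhchap-key and, when a controller key exists, --dhchap-ctrlr-key, verifies the result with bdev_nvme_get_controllers, and detaches. A minimal sketch of that flow follows, assuming rpc_cmd, get_main_ns_ip, and the keys/ckeys/dhgroups arrays are provided earlier by the test framework as the trace suggests; it is a reconstruction for readability, not the verbatim host/auth.sh source.

  # Sketch only -- summarizes the loop recorded in the trace above and below.
  digest=sha256
  for dhgroup in "${dhgroups[@]}"; do            # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ... in this run
    for keyid in "${!keys[@]}"; do               # key indexes 0..4
      # Target side: set hmac(sha256), the DH group, and the DHHC-1 secret for this key id.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

      # Host side: allow only the digest/dhgroup under test, then connect with DH-HMAC-CHAP.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # bidirectional auth only if a ctrlr key exists
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

      # Verify the controller authenticated and came up, then detach before the next combination.
      [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done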
00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.161 19:48:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.419 nvme0n1 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:39.419 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:39.420 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:39.420 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:39.420 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:39.420 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:39.420 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.986 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.244 nvme0n1 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:40.244 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.245 19:48:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.503 nvme0n1 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.503 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.761 nvme0n1 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.761 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.762 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.020 nvme0n1 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:41.020 19:48:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.020 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.279 nvme0n1 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:41.279 19:48:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.179 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.437 nvme0n1 00:19:43.437 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.437 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.437 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.437 19:48:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.437 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.437 19:48:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.438 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.695 nvme0n1 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.695 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.953 
19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.953 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.211 nvme0n1 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:19:44.211 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.212 19:48:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.778 nvme0n1 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:44.778 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.779 19:48:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.779 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.036 nvme0n1 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.036 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:45.293 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.294 19:48:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.860 nvme0n1 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.860 19:48:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:45.860 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.861 19:48:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.425 nvme0n1 00:19:46.425 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.425 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.425 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.425 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.425 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.425 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.683 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.246 nvme0n1 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.246 
19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:47.246 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
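Note on the trace above: it repeats one pattern for every (digest, dhgroup, keyid) combination. host/auth.sh first programs the DHHC-1 secret for that keyid on the kernel nvmet target (the nvmet_auth_set_key helper, which also selects the hash and DH group), then restricts the SPDK initiator to the same digest/DH group with bdev_nvme_set_options, and finally attaches the controller over TCP with the matching --dhchap-key/--dhchap-ctrlr-key pair. A minimal host-side sketch of one iteration, using only the RPCs visible in the trace and assuming key1/ckey1 are key names registered earlier in the script (that registration is not part of this excerpt):

    # Allow only one digest and one DH group for DH-HMAC-CHAP negotiation
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Attach to the target at 10.0.0.1:4420, authenticating with key1
    # (ckey1 additionally enables bidirectional/controller authentication)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

rpc_cmd here is the test harness's wrapper around SPDK's scripts/rpc.py; the same flags apply when invoking rpc.py directly.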
00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.247 19:48:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.826 nvme0n1 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:47.826 
19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:47.826 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.084 19:48:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 nvme0n1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 nvme0n1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
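Between combinations the script verifies that the authenticated connection actually came up and then tears it down before moving on; the bare nvme0n1 tokens interleaved in the trace are the namespace bdev name printed by the attach RPC on success. A condensed sketch of the check/teardown that recurs throughout this section (same RPCs as in the trace; the [[ ... ]] test mirrors the script's assertion):

    # Confirm the controller registered under the expected name ...
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]

    # ... then detach so the next digest/dhgroup/keyid combination starts clean
    rpc_cmd bdev_nvme_detach_controller nvme0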
00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.650 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.909 nvme0n1 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.909 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.168 nvme0n1 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.168 nvme0n1 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:49.168 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.169 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:49.169 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.169 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.427 19:48:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 nvme0n1 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
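The get_main_ns_ip helper traced here simply resolves which environment variable holds the initiator-facing address for the active transport and prints its value (10.0.0.1 for tcp in this run). A minimal reconstruction of that logic from the trace follows; it is a sketch, not the actual nvmf/common.sh source, and it assumes TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are exported by the surrounding test environment.

# Sketch of get_main_ns_ip as traced above (reconstructed, not verbatim).
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Fail if the transport is unset or has no candidate variable.
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to dereference
    ip=${!ip}                              # indirect expansion -> actual address, e.g. 10.0.0.1
    [[ -n $ip ]] || return 1
    echo "$ip"
}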
00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.427 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.687 nvme0n1 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
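Each connect_authenticate call such as the "sha384 ffdhe3072 1" one above comes down to four host-side RPCs: restrict the allowed digests and DH groups, attach the controller with the per-key DH-HMAC-CHAP arguments, confirm the controller name, and detach before the next combination. A minimal stand-alone sketch of that cycle using SPDK's scripts/rpc.py is shown below; the address, port, NQNs and key names key1/ckey1 are taken from this trace, and the keys themselves are assumed to have been registered earlier in the run (not shown in this excerpt).

# Sketch of one host-side attach/verify/detach iteration (assumes an SPDK checkout
# and a target already listening on 10.0.0.1:4420 with matching DHCHAP keys).
rpc=scripts/rpc.py

# Limit the initiator to the digest/DH group under test for this iteration.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Attach with the host key and the bidirectional controller key.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify authentication succeeded (controller exists), then tear down.
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1
$rpc bdev_nvme_detach_controller nvme0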
00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.687 nvme0n1 00:19:49.687 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:49.945 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.946 nvme0n1 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.946 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.204 nvme0n1 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.204 19:48:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.461 19:48:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:50.461 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.461 19:48:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 nvme0n1 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.461 19:48:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.461 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.718 nvme0n1 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.718 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.719 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.719 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.719 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.719 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.719 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.719 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.719 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.976 nvme0n1 00:19:50.976 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.977 19:48:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.977 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.235 nvme0n1 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.235 19:48:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:51.235 19:48:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.235 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.494 nvme0n1 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.494 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.752 nvme0n1 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:51.752 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.009 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.266 nvme0n1 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.266 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.267 19:48:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.525 nvme0n1 00:19:52.525 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.525 19:48:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.525 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.525 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.525 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.525 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:52.783 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.784 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.041 nvme0n1 00:19:53.041 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.041 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.042 19:48:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.607 nvme0n1 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.607 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.865 nvme0n1 00:19:53.865 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.865 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.865 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.865 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.865 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.866 19:48:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.431 nvme0n1 00:19:54.431 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.431 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.431 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.431 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.431 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.688 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 nvme0n1 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.255 19:48:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.851 nvme0n1 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.851 19:48:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:55.852 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.852 19:48:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.416 nvme0n1 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:56.416 19:48:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.416 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.981 nvme0n1 00:19:56.981 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.981 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.981 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.981 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.981 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.981 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.238 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.239 nvme0n1 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.239 19:48:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.239 19:48:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.239 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 nvme0n1 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.496 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.497 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.497 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.497 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.497 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.497 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.497 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.497 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.497 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.756 nvme0n1 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.756 19:48:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.756 19:48:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.756 nvme0n1 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.756 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.014 nvme0n1 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:58.014 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.015 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.273 nvme0n1 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.273 
19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:58.273 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.274 19:48:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.274 19:48:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.274 nvme0n1 00:19:58.274 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.274 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.274 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.274 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.274 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.274 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
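Note on the nvmet_auth_set_key calls traced above: each one provisions the authentication material on the target side before the host attempts to connect, echoing the digest as hmac(<digest>), the DH group, the DHHC-1 host key and, when one exists, the controller key. The xtrace output does not show where those echoes are redirected, so the sketch below is only a reconstruction under the assumption of a Linux kernel nvmet target configured through configfs; the host_dir path and the keys/ckeys arrays are assumptions, not something this log confirms.

# Hypothetical reconstruction of the target-side helper seen in the trace (host/auth.sh@42-51).
# The configfs destination paths are assumed; keys[] / ckeys[] are assumed to hold the
# DHHC-1 strings echoed in the log.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha512)
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "${key}"          > "${host_dir}/dhchap_key"       # DHHC-1 host key
    # A controller key is written only when bidirectional auth is being exercised,
    # matching the [[ -z <ckey> ]] guard visible at host/auth.sh@51.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}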
00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.533 nvme0n1 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.533 19:48:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:58.533 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
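The repeated get_main_ns_ip blocks in the trace (nvmf/common.sh@741-755) resolve which address the initiator should dial: an associative array maps each transport to the name of the environment variable holding the right IP (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), and the resolved value, 10.0.0.1 in this run, is echoed back to the caller. The sketch below is a condensed guess at that helper; the transport variable name and the fallback behaviour are assumptions, only the array contents and the echoed result come from the trace.

# Condensed sketch of the address-resolution helper traced above; error handling is assumed.
get_main_ns_ip() {
    local ip var
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # values are variable *names*, resolved indirectly below
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z ${TEST_TRANSPORT:-} ]] && return 1          # assumed variable; expands to "tcp" in this run
    var=${ip_candidates[$TEST_TRANSPORT]:-}
    [[ -z $var ]] && return 1
    ip=${!var:-}                                      # e.g. NVMF_INITIATOR_IP -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}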
00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.792 nvme0n1 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.792 
19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.792 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.050 nvme0n1 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:19:59.050 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.051 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.310 nvme0n1 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.310 19:48:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.310 19:48:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.310 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.569 nvme0n1 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
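Each connect_authenticate invocation in the trace drives the SPDK initiator side over JSON-RPC: it restricts the allowed digests and DH groups with bdev_nvme_set_options, attaches a controller with the key under test (adding --dhchap-ctrlr-key only when a controller key exists for that keyid, via the ${ckeys[keyid]:+...} expansion at host/auth.sh@58), confirms the controller came up under the expected name, and detaches it again. The sketch below is assembled from the RPC calls visible in the trace; rpc_cmd is the test suite's JSON-RPC wrapper, and the NQNs, port and transport are taken verbatim from the commands above.

# Condensed sketch of the initiator-side check traced above (host/auth.sh@55-65).
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Pass --dhchap-ctrlr-key only if a controller key is defined for this keyid.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The attach only succeeds if DH-HMAC-CHAP completed; verify, then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}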
00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.569 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 nvme0n1 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.827 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.828 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.086 nvme0n1 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.086 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.344 nvme0n1 00:20:00.344 19:48:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.344 19:48:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
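[editorial sketch] The nvmet_auth_set_key calls traced above only show the values being emitted: 'hmac(sha512)', the DH group name, and the DHHC-1 secrets, with an optional controller secret guarded by a [[ -z ... ]] test. Where those echoes land is not visible in this excerpt; the sketch below assumes they go to the kernel nvmet per-host configfs attributes, and the attribute names and path are assumptions, not taken from host/auth.sh itself.

# Rough shape of the target-side step, assuming kernel nvmet configfs as the destination.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    # Assumed path; the hostnqn matches the one used by the attach calls in this run.
    local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host_cfg}/dhchap_hash"      # e.g. hmac(sha512)
    echo "${dhgroup}"      > "${host_cfg}/dhchap_dhgroup"   # e.g. ffdhe6144
    echo "${key}"          > "${host_cfg}/dhchap_key"       # DHHC-1:00:... host secret
    # Controller secret is optional; keyid 4 in this run has none, so the guard skips it.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_cfg}/dhchap_ctrl_key"
}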
00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.345 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 nvme0n1 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.911 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.206 nvme0n1 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.206 19:48:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.492 nvme0n1 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.492 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.751 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.010 nvme0n1 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.010 19:48:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.268 nvme0n1 00:20:02.268 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.268 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.268 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.268 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.268 19:48:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.268 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODUyY2VhYTg0MDg0NTA2ODQ0NjRhOTY0MjgwMDk3NjQ/kx6u: 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: ]] 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MwYWI0MTZjOGI0MWNmYmIyMjBhYjFmNTQ1ZTlkMmUxOTIxZjA5OGIxOTljNzdkYjdhZWJiNDQ5YWY5ZDdkNHU4wos=: 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.527 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.095 nvme0n1 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.095 19:48:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.662 nvme0n1 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.662 19:48:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:03.662 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY0YzgzN2FmZjdlNWYwNTZmOWQwZDI5Mjk5ODdmMjTJ2PXJ: 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: ]] 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODJlZTAwODk1YmIxNDkyZDYyM2Q2NWRhYWIxY2EyMjbZarWr: 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.663 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.228 nvme0n1 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:20:04.228 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY5ODNjYmUzZWViODg1MzhkYWNlMGRiZTIxYmM0N2NkNmQ2NmU3Yjc3NzQ4MzNhS5JU5w==: 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: ]] 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDFjNmFiOGUzNGI4OTNkZmFjNWJjOTYxYzE3NTQwNGWxuH7L: 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:04.229 19:48:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.229 19:48:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.795 nvme0n1 00:20:04.795 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.795 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.795 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.795 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.795 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.795 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc4ZjllYmJhOTdjNWMzOTYxYWY3YjVjMTE5MzNjODA3MzgxZDZlYTgwYzEwODFkOTIzMzM2YWYyNDM4MTNjNoBSDOg=: 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:05.053 19:48:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.621 nvme0n1 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJjYmVjNzBkNzlmODlmMzg4MjA0ODdiN2NlZWE5OTRjZjFkZTMxMTE0ZDMxYjE4F2gFOQ==: 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2YjJjYTkzZTUwZjE5Y2FjODAxNTliYzY2NDNiYmRlNDMzMDlmY2RiYmI4OTBlvJeiHw==: 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.621 
19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.621 2024/07/15 19:48:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:05.621 request: 00:20:05.621 { 00:20:05.621 "method": "bdev_nvme_attach_controller", 00:20:05.621 "params": { 00:20:05.621 "name": "nvme0", 00:20:05.621 "trtype": "tcp", 00:20:05.621 "traddr": "10.0.0.1", 00:20:05.621 "adrfam": "ipv4", 00:20:05.621 "trsvcid": "4420", 00:20:05.621 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:05.621 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:05.621 "prchk_reftag": false, 00:20:05.621 "prchk_guard": false, 00:20:05.621 "hdgst": false, 00:20:05.621 "ddgst": false 00:20:05.621 } 00:20:05.621 } 00:20:05.621 Got JSON-RPC error response 00:20:05.621 GoRPCClient: error on JSON-RPC call 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:05.621 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.622 2024/07/15 19:48:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:05.622 request: 00:20:05.622 { 00:20:05.622 "method": "bdev_nvme_attach_controller", 00:20:05.622 "params": { 00:20:05.622 "name": 
"nvme0", 00:20:05.622 "trtype": "tcp", 00:20:05.622 "traddr": "10.0.0.1", 00:20:05.622 "adrfam": "ipv4", 00:20:05.622 "trsvcid": "4420", 00:20:05.622 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:05.622 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:05.622 "prchk_reftag": false, 00:20:05.622 "prchk_guard": false, 00:20:05.622 "hdgst": false, 00:20:05.622 "ddgst": false, 00:20:05.622 "dhchap_key": "key2" 00:20:05.622 } 00:20:05.622 } 00:20:05.622 Got JSON-RPC error response 00:20:05.622 GoRPCClient: error on JSON-RPC call 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:05.622 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.881 2024/07/15 19:48:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:05.881 request: 00:20:05.881 { 00:20:05.881 "method": "bdev_nvme_attach_controller", 00:20:05.881 "params": { 00:20:05.881 "name": "nvme0", 00:20:05.881 "trtype": "tcp", 00:20:05.881 "traddr": "10.0.0.1", 00:20:05.881 "adrfam": "ipv4", 00:20:05.881 "trsvcid": "4420", 00:20:05.881 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:05.881 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:05.881 "prchk_reftag": false, 00:20:05.881 "prchk_guard": false, 00:20:05.881 "hdgst": false, 00:20:05.881 "ddgst": false, 00:20:05.881 "dhchap_key": "key1", 00:20:05.881 "dhchap_ctrlr_key": "ckey2" 00:20:05.881 } 00:20:05.881 } 00:20:05.881 Got JSON-RPC error response 00:20:05.881 GoRPCClient: error on JSON-RPC call 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.881 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.881 rmmod nvme_tcp 00:20:05.881 rmmod nvme_fabrics 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91773 ']' 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91773 00:20:05.882 19:48:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91773 ']' 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91773 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91773 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:05.882 killing process with pid 91773 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91773' 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91773 00:20:05.882 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91773 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:06.141 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:06.400 19:48:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:06.967 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.967 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:07.226 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:07.226 19:48:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.1zG /tmp/spdk.key-null.IIc /tmp/spdk.key-sha256.0EH /tmp/spdk.key-sha384.qTj /tmp/spdk.key-sha512.e3Z /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:07.226 19:48:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:07.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:07.485 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:07.485 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:07.485 00:20:07.485 real 0m34.719s 00:20:07.485 user 0m31.551s 00:20:07.485 sys 0m3.890s 00:20:07.485 19:48:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.485 19:48:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.485 ************************************ 00:20:07.485 END TEST nvmf_auth_host 00:20:07.485 ************************************ 00:20:07.743 19:48:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:07.743 19:48:33 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:20:07.743 19:48:33 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:07.743 19:48:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:07.743 19:48:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.743 19:48:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.743 ************************************ 00:20:07.743 START TEST nvmf_digest 00:20:07.743 ************************************ 00:20:07.743 19:48:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:07.743 * Looking for test storage... 
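The nvmf_auth_host cleanup traced just above unwinds the kernel nvmet target through configfs before deleting the generated DH-HMAC-CHAP key files. A minimal bash sketch of that teardown, using the paths and nqn shown in the log (the ordering is the important part: the port link is removed before the namespace and subsystem directories; the enable attribute written by the bare 'echo 0' is an assumption):

    nqn=nqn.2024-02.io.spdk:cnode0
    cfg=/sys/kernel/config/nvmet
    rm "$cfg/subsystems/$nqn/allowed_hosts/nqn.2024-02.io.spdk:host0"   # host/auth.sh@25
    rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"                        # host/auth.sh@26
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"                 # assumed target of the bare 'echo 0'
    rm -f "$cfg/ports/1/subsystems/$nqn"
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet
    rm -f /tmp/spdk.key-null.1zG /tmp/spdk.key-null.IIc /tmp/spdk.key-sha256.0EH \
          /tmp/spdk.key-sha384.qTj /tmp/spdk.key-sha512.e3Z             # auth secrets generated by this run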
00:20:07.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:07.743 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.743 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
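With NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init, whose individual ip commands appear in the next few entries. Condensed into a sketch (interface names and addresses are the ones in the log; the second target interface, nvmf_tgt_if2 on 10.0.0.3, is created the same way and is left out here):

    ip netns add nvmf_tgt_ns_spdk
    # Two veth pairs: one end of each stays in the root namespace, the target end
    # is moved into the namespace that will run nvmf_tgt.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the root-namespace ends so 10.0.0.1 (initiator) can reach 10.0.0.2 (target).
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # connectivity sanity check, as in the log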
00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:07.744 Cannot find device "nvmf_tgt_br" 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.744 Cannot find device "nvmf_tgt_br2" 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:07.744 Cannot find device "nvmf_tgt_br" 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:07.744 Cannot find device "nvmf_tgt_br2" 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:07.744 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.002 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.003 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:08.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:20:08.262 00:20:08.262 --- 10.0.0.2 ping statistics --- 00:20:08.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.262 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:08.262 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:08.262 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:08.262 00:20:08.262 --- 10.0.0.3 ping statistics --- 00:20:08.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.262 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:08.262 00:20:08.262 --- 10.0.0.1 ping statistics --- 00:20:08.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.262 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:08.262 ************************************ 00:20:08.262 START TEST nvmf_digest_clean 00:20:08.262 ************************************ 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93360 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93360 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93360 ']' 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.262 19:48:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.262 19:48:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:08.262 [2024-07-15 19:48:33.909903] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:20:08.262 [2024-07-15 19:48:33.910016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.520 [2024-07-15 19:48:34.053355] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.520 [2024-07-15 19:48:34.149860] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.520 [2024-07-15 19:48:34.149925] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.520 [2024-07-15 19:48:34.149966] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.520 [2024-07-15 19:48:34.149986] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.520 [2024-07-15 19:48:34.149996] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
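nvmfappstart above is the generic launch-and-wait helper: the target is started inside the namespace with --wait-for-rpc and waitforlisten polls its JSON-RPC socket until it answers. Roughly (the nvmf_tgt command line and max_retries=100 are taken from the trace; the polling call and sleep interval are assumptions about waitforlisten's internals):

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do                      # max_retries=100
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
    (( i < 100 )) || { echo "nvmf_tgt did not start"; exit 1; }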
00:20:08.520 [2024-07-15 19:48:34.150039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.455 19:48:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:09.455 null0 00:20:09.455 [2024-07-15 19:48:35.070854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.455 [2024-07-15 19:48:35.094985] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93411 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93411 /var/tmp/bperf.sock 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93411 ']' 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
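The rpc_cmd batch behind host/digest.sh@43 is what produces the "null0" bdev, the "TCP Transport Init" notice and the listener on 10.0.0.2:4420 seen above. The script body itself is not in this log, so the following is only a sketch of a target configuration that would leave the same state (the null bdev size/block size and the allow-any-host flag are assumptions; the serial, nqn, transport options and address come from the trace):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
    rpc framework_start_init                              # target was started with --wait-for-rpc
    rpc bdev_null_create null0 100 4096                   # assumed 100 MiB / 4 KiB blocks
    rpc nvmf_create_transport -t tcp -o                   # NVMF_TRANSPORT_OPTS='-t tcp -o' in the log
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -a
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4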
00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.455 19:48:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:09.455 [2024-07-15 19:48:35.161385] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:20:09.455 [2024-07-15 19:48:35.161488] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93411 ] 00:20:09.714 [2024-07-15 19:48:35.301636] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.714 [2024-07-15 19:48:35.428091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.665 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.666 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:10.666 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:10.666 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:10.666 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:10.924 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:10.924 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:11.183 nvme0n1 00:20:11.441 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:11.441 19:48:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:11.441 Running I/O for 2 seconds... 
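Each run_bperf pass drives the already-running bdevperf instance purely over /var/tmp/bperf.sock, exactly as traced above; written out as plain commands it is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    # Finish bdevperf start-up (it was launched with --wait-for-rpc), then create the
    # NVMe-oF bdev with data digest enabled so the TCP data PDUs carry a CRC32C.
    "$rpc" -s /var/tmp/bperf.sock framework_start_init
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # The workload shape (-w randread -o 4096 -q 128 -t 2) was fixed on the bdevperf
    # command line; perform_tests just starts it and waits for the result.
    "$bperf_py" -s /var/tmp/bperf.sock perform_tests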
00:20:13.969 00:20:13.969 Latency(us) 00:20:13.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.969 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:13.969 nvme0n1 : 2.00 18649.55 72.85 0.00 0.00 6855.20 3247.01 15371.17 00:20:13.969 =================================================================================================================== 00:20:13.969 Total : 18649.55 72.85 0.00 0.00 6855.20 3247.01 15371.17 00:20:13.969 0 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:13.969 | select(.opcode=="crc32c") 00:20:13.969 | "\(.module_name) \(.executed)"' 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93411 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93411 ']' 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93411 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93411 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:13.969 killing process with pid 93411 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93411' 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93411 00:20:13.969 Received shutdown signal, test time was about 2.000000 seconds 00:20:13.969 00:20:13.969 Latency(us) 00:20:13.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.969 =================================================================================================================== 00:20:13.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93411 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93501 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93501 /var/tmp/bperf.sock 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93501 ']' 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.969 19:48:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:13.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:13.969 Zero copy mechanism will not be used. 00:20:13.969 [2024-07-15 19:48:39.722416] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:20:13.969 [2024-07-15 19:48:39.722541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93501 ] 00:20:14.228 [2024-07-15 19:48:39.859812] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.228 [2024-07-15 19:48:39.967514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.165 19:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.165 19:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:15.165 19:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:15.165 19:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:15.165 19:48:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:15.424 19:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.424 19:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.682 nvme0n1 00:20:15.682 19:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:15.682 19:48:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:15.959 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:15.959 Zero copy mechanism will not be used. 00:20:15.959 Running I/O for 2 seconds... 
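The get_accel_stats check that follows each two-second run (visible above after the first run and again below) asks bdevperf's accel framework how many crc32c operations it executed and through which module; with scan_dsa=false the expected module is software. In effect:

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # digests were actually computed during the run
    [[ $acc_module == software ]]     # and by the module this test variant expects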
00:20:17.860 00:20:17.860 Latency(us) 00:20:17.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.860 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:17.860 nvme0n1 : 2.00 8269.82 1033.73 0.00 0.00 1930.73 755.90 8102.63 00:20:17.860 =================================================================================================================== 00:20:17.860 Total : 8269.82 1033.73 0.00 0.00 1930.73 755.90 8102.63 00:20:17.860 0 00:20:17.860 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:17.860 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:17.860 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:17.860 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:17.860 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:17.860 | select(.opcode=="crc32c") 00:20:17.860 | "\(.module_name) \(.executed)"' 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93501 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93501 ']' 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93501 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93501 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:18.118 killing process with pid 93501 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93501' 00:20:18.118 Received shutdown signal, test time was about 2.000000 seconds 00:20:18.118 00:20:18.118 Latency(us) 00:20:18.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.118 =================================================================================================================== 00:20:18.118 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93501 00:20:18.118 19:48:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93501 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93596 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93596 /var/tmp/bperf.sock 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93596 ']' 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.376 19:48:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:18.376 [2024-07-15 19:48:44.079081] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:20:18.376 [2024-07-15 19:48:44.079192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93596 ] 00:20:18.634 [2024-07-15 19:48:44.210959] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.634 [2024-07-15 19:48:44.318025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.568 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.568 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:19.568 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:19.568 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:19.568 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:19.827 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:19.827 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:20.085 nvme0n1 00:20:20.085 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:20.086 19:48:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:20.086 Running I/O for 2 seconds... 
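This randwrite 4096/128 pass and the 131072/16 one that follows complete the matrix for the clean (non-DSA) variant. digest.sh issues them as four explicit run_bperf calls (host/digest.sh@128 through @131); the columns map onto the rw/bs/qd/scan_dsa locals seen at the top of each trace:

    #            rw         io-size  queue-depth  dsa-initiator
    run_bperf    randread   4096     128          false
    run_bperf    randread   131072   16           false
    run_bperf    randwrite  4096     128          false
    run_bperf    randwrite  131072   16           false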
00:20:22.052 00:20:22.052 Latency(us) 00:20:22.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.052 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:22.052 nvme0n1 : 2.00 24363.97 95.17 0.00 0.00 5248.28 2234.18 12213.53 00:20:22.052 =================================================================================================================== 00:20:22.052 Total : 24363.97 95.17 0.00 0.00 5248.28 2234.18 12213.53 00:20:22.052 0 00:20:22.052 19:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:22.052 19:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:22.052 19:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:22.052 | select(.opcode=="crc32c") 00:20:22.052 | "\(.module_name) \(.executed)"' 00:20:22.052 19:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:22.052 19:48:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93596 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93596 ']' 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93596 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93596 00:20:22.311 killing process with pid 93596 00:20:22.311 Received shutdown signal, test time was about 2.000000 seconds 00:20:22.311 00:20:22.311 Latency(us) 00:20:22.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.311 =================================================================================================================== 00:20:22.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93596' 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93596 00:20:22.311 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93596 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93682 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93682 /var/tmp/bperf.sock 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93682 ']' 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:22.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.569 19:48:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:22.569 [2024-07-15 19:48:48.325496] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:20:22.569 [2024-07-15 19:48:48.325795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93682 ] 00:20:22.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:22.569 Zero copy mechanism will not be used.
00:20:22.827 [2024-07-15 19:48:48.462731] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.827 [2024-07-15 19:48:48.559980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.762 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.762 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:23.762 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:23.762 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:23.762 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:24.019 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:24.019 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:24.277 nvme0n1 00:20:24.277 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:24.277 19:48:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:24.277 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:24.277 Zero copy mechanism will not be used. 00:20:24.277 Running I/O for 2 seconds...
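The second clean pass above switches to 128 KiB randwrite at queue depth 16; because 131072 exceeds the 65536-byte zero copy threshold, the initiator notes that zero copy is disabled for this run. Below is a condensed view of this pass and of the crc32c accounting check that digest.sh performs after each run (host/digest.sh@36-37 and @93-96 in the trace); every command is taken from the log, and the jq output line is shown only as the expected shape, not a captured result.
    # bdevperf launch recorded at host/digest.sh@82 above: 128 KiB writes, qd 16, paused until RPC init
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
    # after the run, confirm the software crc32c module actually executed digest work
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected shape: "software <non-zero count>", which satisfies the checks at host/digest.sh@94-96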
00:20:26.802 00:20:26.802 Latency(us) 00:20:26.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.802 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:26.802 nvme0n1 : 2.00 6002.82 750.35 0.00 0.00 2659.93 2055.45 11856.06 00:20:26.802 =================================================================================================================== 00:20:26.802 Total : 6002.82 750.35 0.00 0.00 2659.93 2055.45 11856.06 00:20:26.802 0 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:26.802 | select(.opcode=="crc32c") 00:20:26.802 | "\(.module_name) \(.executed)"' 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93682 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93682 ']' 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93682 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93682 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:26.802 killing process with pid 93682 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93682' 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93682 00:20:26.802 Received shutdown signal, test time was about 2.000000 seconds 00:20:26.802 00:20:26.802 Latency(us) 00:20:26.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.802 =================================================================================================================== 00:20:26.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93682 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93360 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 93360 ']' 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93360 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93360 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:26.802 killing process with pid 93360 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93360' 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93360 00:20:26.802 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93360 00:20:27.060 00:20:27.060 real 0m18.915s 00:20:27.060 user 0m36.091s 00:20:27.060 sys 0m4.744s 00:20:27.060 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.060 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:27.061 ************************************ 00:20:27.061 END TEST nvmf_digest_clean 00:20:27.061 ************************************ 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:27.061 ************************************ 00:20:27.061 START TEST nvmf_digest_error 00:20:27.061 ************************************ 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93795 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93795 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93795 ']' 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.061 19:48:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:27.319 [2024-07-15 19:48:52.864904] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:20:27.319 [2024-07-15 19:48:52.865004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.319 [2024-07-15 19:48:52.997596] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.319 [2024-07-15 19:48:53.093539] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.319 [2024-07-15 19:48:53.093604] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.319 [2024-07-15 19:48:53.093633] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.319 [2024-07-15 19:48:53.093641] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.319 [2024-07-15 19:48:53.093649] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.319 [2024-07-15 19:48:53.093676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:28.252 [2024-07-15 19:48:53.830259] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.252 19:48:53 
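The nvmf_digest_error suite that starts here brings up its own NVMe-oF target inside the nvmf_tgt_ns_spdk namespace with tracing enabled (-e 0xFFFF) and startup gated on RPC (--wait-for-rpc), as recorded at nvmf/common.sh@480 above. The lines below only restate that launch and the trace hint printed by the app_setup_trace notices; the backgrounding and the trailing commands are illustrative, not quoted from the script.
    # target launch as recorded in the trace (NET_TYPE=virt network namespace)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    # per the app_setup_trace notices above, a snapshot of the enabled tracepoints can be taken with
    spdk_trace -s nvmf -i 0
    # or by copying /dev/shm/nvmf_trace.0 for offline analysis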
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:28.252 null0 00:20:28.252 [2024-07-15 19:48:53.940886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.252 [2024-07-15 19:48:53.965043] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93839 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93839 /var/tmp/bperf.sock 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93839 ']' 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.252 19:48:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:28.252 [2024-07-15 19:48:54.017958] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:20:28.252 [2024-07-15 19:48:54.018037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93839 ] 00:20:28.511 [2024-07-15 19:48:54.144973] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.511 [2024-07-15 19:48:54.236699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.445 19:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.445 19:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:29.445 19:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:29.445 19:48:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:29.445 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:29.445 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.445 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:29.445 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.445 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:29.446 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.074 nvme0n1 00:20:30.074 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:30.074 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.074 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.074 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.074 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:30.074 19:48:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:30.074 Running I/O for 2 seconds... 
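Before the run above starts, the error test rewires crc32c through SPDK's error-injection accel module (the accel_rpc.c notice at 19:48:53.830259 above), tells bdevperf not to retry failed commands, and then injects 256 corrupted crc32c operations; each corrupted digest surfaces on the initiator as the nvme_tcp "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" records that follow. The restatement below is assembled from the rpc_cmd and bperf_rpc calls visible in the trace; using rpc.py without -s for the target-side calls is an assumption about where rpc_cmd points (the target's default socket), not something shown in the log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: route crc32c through the error module (host/digest.sh@104 above)
    $rpc accel_assign_opc -o crc32c -m error
    # bdevperf side: per-command error stats, no bdev-layer retries (host/digest.sh@61)
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep injection disabled while attaching with data digest on (host/digest.sh@63-64)
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # then corrupt the next 256 crc32c operations and start the workload (host/digest.sh@67-69)
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests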
00:20:30.074 [2024-07-15 19:48:55.667456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.074 [2024-07-15 19:48:55.667525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.074 [2024-07-15 19:48:55.667563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.074 [2024-07-15 19:48:55.681477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.074 [2024-07-15 19:48:55.681545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.074 [2024-07-15 19:48:55.681574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.074 [2024-07-15 19:48:55.695073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.074 [2024-07-15 19:48:55.695135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.074 [2024-07-15 19:48:55.695166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.074 [2024-07-15 19:48:55.706994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.707050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.707079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.720525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.720568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.720583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.734208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.734247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.734263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.747739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.747782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.747797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.761228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.761283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.761313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.772962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.773017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.773045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.786881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.786937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.786965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.798986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.799041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.799071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.809740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.809797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.809826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.823624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.823683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.823697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.835977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.836035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.836065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.075 [2024-07-15 19:48:55.849901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.075 [2024-07-15 19:48:55.849980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.075 [2024-07-15 19:48:55.849995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.333 [2024-07-15 19:48:55.861989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.333 [2024-07-15 19:48:55.862031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.862061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.875926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.875966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.875995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.887916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.887958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.887988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.902082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.902125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.902156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.913580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.913619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.913649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.927298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.927337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:30.334 [2024-07-15 19:48:55.927382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.941501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.941544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.941574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.954919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.954958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.954987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.967826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.967871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.967886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.982813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.982854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.982884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:55.993378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:55.993418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:55.993447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.007976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.008016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.008045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.021493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.021546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:12995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.021575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.034854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.034894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.034923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.047288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.047327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.047357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.059019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.059058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.059088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.074129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.074216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.074231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.085627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.085666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.085696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.098510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.098549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.098579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.334 [2024-07-15 19:48:56.112146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.334 [2024-07-15 19:48:56.112238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.334 [2024-07-15 19:48:56.112270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.593 [2024-07-15 19:48:56.126357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.593 [2024-07-15 19:48:56.126397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.593 [2024-07-15 19:48:56.126426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.593 [2024-07-15 19:48:56.138185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.593 [2024-07-15 19:48:56.138225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.593 [2024-07-15 19:48:56.138239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.593 [2024-07-15 19:48:56.152046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.593 [2024-07-15 19:48:56.152086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.593 [2024-07-15 19:48:56.152116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.593 [2024-07-15 19:48:56.165983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.593 [2024-07-15 19:48:56.166028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.593 [2024-07-15 19:48:56.166058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.175769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.175809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.175839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.190781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.190823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.190853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.204794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 
00:20:30.594 [2024-07-15 19:48:56.204835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.204851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.218319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.218364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.218380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.233372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.233412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.233427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.244665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.244706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.244738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.257754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.257794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.257824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.273290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.273332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.273364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.286961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.287004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.287019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.298813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.298856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.298886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.313474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.313516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.313546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.326675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.326715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.326745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.342753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.342803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.342834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.355693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.355737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.355753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.594 [2024-07-15 19:48:56.368334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.594 [2024-07-15 19:48:56.368377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.594 [2024-07-15 19:48:56.368391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.853 [2024-07-15 19:48:56.382833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.382877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.853 [2024-07-15 19:48:56.382892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.853 [2024-07-15 19:48:56.395908] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.395951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.853 [2024-07-15 19:48:56.395981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.853 [2024-07-15 19:48:56.410761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.410803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.853 [2024-07-15 19:48:56.410834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.853 [2024-07-15 19:48:56.424365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.424405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.853 [2024-07-15 19:48:56.424435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.853 [2024-07-15 19:48:56.438278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.438340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.853 [2024-07-15 19:48:56.438371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.853 [2024-07-15 19:48:56.453041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.453087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.853 [2024-07-15 19:48:56.453119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.853 [2024-07-15 19:48:56.466767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.466808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.853 [2024-07-15 19:48:56.466838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.853 [2024-07-15 19:48:56.479335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.479409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.853 [2024-07-15 19:48:56.479439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:30.853 [2024-07-15 19:48:56.493516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.853 [2024-07-15 19:48:56.493556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.493587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.506360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.506400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.506430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.517772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.517812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.517842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.531437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.531477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.531508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.544617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.544656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.544686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.558073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.558113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.558143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.568671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.568710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.568741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.583577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.583621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.583651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.595417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.595457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.595487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.608029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.608073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.608087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.854 [2024-07-15 19:48:56.620907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:30.854 [2024-07-15 19:48:56.620950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.854 [2024-07-15 19:48:56.620966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.638959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.639005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.639020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.651316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.651360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.651375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.665614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.665657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.665672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.678961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.679005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.679020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.694681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.694725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.694740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.708038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.708082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.708097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.724456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.724499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.724515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.738503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.738547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.738562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.750489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.750532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.750547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.765401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.765444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:31.114 [2024-07-15 19:48:56.765459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.778801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.778850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.778866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.794035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.794079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.794094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.809065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.809109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.809124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.820733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.820776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.820792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.835507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.835563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.835578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.850293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.850334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.850349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.865184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.865226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:11160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.865241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.878940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.878983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.878998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.114 [2024-07-15 19:48:56.892410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.114 [2024-07-15 19:48:56.892453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.114 [2024-07-15 19:48:56.892469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:56.905360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:56.905403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:56.905417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:56.921221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:56.921264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:56.921279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:56.934664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:56.934708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:56.934724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:56.948823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:56.948879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:56.948895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:56.964515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:56.964559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:56.964583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:56.978650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:56.978694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:56.978709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:56.992943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:56.992998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:56.993014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:57.008599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:57.008648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:57.008664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:57.022237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:57.022279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:57.022294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:57.036825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:57.036868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:57.036883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:57.051530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:57.051574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:57.051589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:57.066387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 
00:20:31.373 [2024-07-15 19:48:57.066429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:57.066444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:57.079011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:57.079053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.373 [2024-07-15 19:48:57.079068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.373 [2024-07-15 19:48:57.092330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.373 [2024-07-15 19:48:57.092372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.374 [2024-07-15 19:48:57.092387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.374 [2024-07-15 19:48:57.106992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.374 [2024-07-15 19:48:57.107040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.374 [2024-07-15 19:48:57.107055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.374 [2024-07-15 19:48:57.122533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.374 [2024-07-15 19:48:57.122578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.374 [2024-07-15 19:48:57.122594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.374 [2024-07-15 19:48:57.134299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.374 [2024-07-15 19:48:57.134352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.374 [2024-07-15 19:48:57.134367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.374 [2024-07-15 19:48:57.149465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.374 [2024-07-15 19:48:57.149508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.374 [2024-07-15 19:48:57.149523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.165072] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.165117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.165131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.179366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.179408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.179423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.192377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.192419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.192435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.205036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.205082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.205097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.220598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.220643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.220666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.232943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.232985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.232999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.247859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.247905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.247920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.262113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.262172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.262188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.276791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.276835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.276851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.291082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.291127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.291142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.305125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.305177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.305193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.319453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.319495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.319510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.333352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.333394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.333408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.348270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.348322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.348338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.361667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.361711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.361726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.375378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.375420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.375435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.388023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.388068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.388083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.633 [2024-07-15 19:48:57.403845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.633 [2024-07-15 19:48:57.403889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.633 [2024-07-15 19:48:57.403904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.417348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.417390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.417405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.432134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.432185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.432205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.446126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.446181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.446197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.457736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.457780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.457795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.472623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.472667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.472682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.486612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.486656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.486674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.500448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.500492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.500507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.514738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.514782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.514797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.529442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.529485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.529500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.541700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.541749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:31.893 [2024-07-15 19:48:57.541765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.558224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.558266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.558282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.572408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.572450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.572464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.587217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.587259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.587274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.601734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.601777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.601792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.614761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.614805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.614820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.893 [2024-07-15 19:48:57.627777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.893 [2024-07-15 19:48:57.627822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.893 [2024-07-15 19:48:57.627837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.894 [2024-07-15 19:48:57.642937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220ae10) 00:20:31.894 [2024-07-15 19:48:57.642983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:1147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:31.894 [2024-07-15 19:48:57.642999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:31.894
00:20:31.894 Latency(us)
00:20:31.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.894 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:31.894 nvme0n1 : 2.00 18579.84 72.58 0.00 0.00 6882.37 3470.43 19422.49
00:20:31.894 ===================================================================================================================
00:20:31.894 Total : 18579.84 72.58 0.00 0.00 6882.37 3470.43 19422.49
00:20:31.894 0
00:20:31.894 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:31.894 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:31.894 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:31.894 | .driver_specific
00:20:31.894 | .nvme_error
00:20:31.894 | .status_code
00:20:31.894 | .command_transient_transport_error'
00:20:31.894 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:32.152 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:20:32.152 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93839
00:20:32.152 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93839 ']'
00:20:32.152 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93839
00:20:32.152 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:20:32.152 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:32.152 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93839
00:20:32.410 killing process with pid 93839
Received shutdown signal, test time was about 2.000000 seconds
00:20:32.410
00:20:32.410 Latency(us)
00:20:32.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:32.410 ===================================================================================================================
00:20:32.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:32.410 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:32.410 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:32.410 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93839'
00:20:32.410 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93839
00:20:32.410 19:48:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93839
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
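
The block above closes out the first randread pass (4 KiB I/O, queue depth 128): bdevperf prints its latency summary, then get_transient_errcount pulls the per-bdev NVMe completion-status counters that bdev_get_iostat exposes when the bdev_nvme module runs with --nvme-error-stat (the same option traced for the next pass below), and jq extracts status_code.command_transient_transport_error. This pass recorded 145 transient transport errors, so the (( 145 > 0 )) check passes and the bdevperf process (pid 93839) is shut down before run_bperf_err sets up the next pass. Condensed from the trace into a standalone bash sketch (same RPC socket, bdev name and jq filter as logged):

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat includes nvme_error counters because bdev_nvme was
        # started with --nvme-error-stat; pick out the transient transport errors
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }
    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the digest-error test only passes if errors were actually seen
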
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93924
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93924 /var/tmp/bperf.sock
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93924 ']'
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:32.410 19:48:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:32.668 [2024-07-15 19:48:58.251818] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization...
00:20:32.668 [2024-07-15 19:48:58.252268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93924 ]
00:20:32.668 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:32.668 Zero copy mechanism will not be used.
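
At this point run_bperf_err begins the second pass (randread, 128 KiB I/O, queue depth 16): bdevperf is started with its JSON-RPC server on /var/tmp/bperf.sock and the -z flag, so it sits idle until a perform_tests RPC arrives, and waitforlisten polls until the socket is answering. Roughly, the launch reduces to the sketch below; the polling loop is a simplified stand-in for the real waitforlisten helper in autotest_common.sh, and rpc_get_methods is used here only as a liveness probe:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc_sock=/var/tmp/bperf.sock
    # -m 2: core mask 0x2 (core 1); -o 131072 -q 16 -t 2: 128 KiB randread, QD 16, 2 s runs;
    # -z: wait for a perform_tests RPC instead of starting I/O immediately
    "$bdevperf" -m 2 -r "$rpc_sock" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
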
00:20:32.668 [2024-07-15 19:48:58.401936] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:32.927 [2024-07-15 19:48:58.503093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:33.493 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:33.493 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:20:33.493 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:33.493 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:33.751 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:33.751 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:33.751 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:33.751 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:33.752 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:33.752 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:34.010 nvme0n1
00:20:34.011 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:34.011 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:34.011 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:34.270 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:34.270 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:34.270 19:48:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:34.270 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:34.270 Zero copy mechanism will not be used.
00:20:34.270 Running I/O for 2 seconds...
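
The trace above is the whole setup for the digest-error pass in one place: bdev_nvme_set_options enables per-controller error statistics and unlimited retries (--bdev-retry-count -1), accel_error_inject_error is first reset and then configured to corrupt crc32c operations, and the controller is attached with --ddgst so the initiator checks the data digest of every PDU it receives; with the crc32c results corrupted, those checks fail and reads complete with the transient transport errors seen below and counted afterwards. perform_tests then starts the 2-second workload. The RPC sequence, condensed into a sketch (same socket, target address and subsystem NQN as logged):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable       # start from a clean state
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
           -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # data digest enabled
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32 # corrupt crc32c results (flags as logged)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
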
00:20:34.270 [2024-07-15 19:48:59.925964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.926023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.926041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.930233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.930276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.930292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.935275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.935317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.935333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.939770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.939814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.939830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.943029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.943072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.943088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.947631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.947676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.947692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.951807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.951852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.951868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.954983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.955027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.955043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.959850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.959894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.959910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.963839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.963888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.963904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.967743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.967787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.967803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.971602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.971647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.971662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.976558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.270 [2024-07-15 19:48:59.976603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.270 [2024-07-15 19:48:59.976618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.270 [2024-07-15 19:48:59.980826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:48:59.980870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:48:59.980885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:48:59.983917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:48:59.983960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:48:59.983975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:48:59.988563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:48:59.988608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:48:59.988623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:48:59.993531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:48:59.993575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:48:59.993591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:48:59.997248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:48:59.997294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:48:59.997309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.001314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.001363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.001379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.006399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.006443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.006459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.009572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.009615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.009632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.014245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.014288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.014304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.019182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.019233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.019257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.023978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.024022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.024038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.028266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.028308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.028324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.030949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.030992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.031007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.035824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.035873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.035891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.039757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.039802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 
[2024-07-15 19:49:00.039818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.043474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.043516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.043531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.047197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.047248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.047267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.271 [2024-07-15 19:49:00.051042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.271 [2024-07-15 19:49:00.051086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.271 [2024-07-15 19:49:00.051102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.055403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.055448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.055463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.058923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.058968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.058983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.063617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.063660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.063675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.066971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.067015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.067030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.071121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.071185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.071207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.074970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.075015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.075031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.078715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.078880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.078900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.082701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.082871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.082891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.087555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.087742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.087883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.090815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.091002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.091144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.096504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.096692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.096826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.101551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.101734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.101865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.105421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.105625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.105764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.109848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.110059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.110283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.114322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.114364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.114380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.119553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.119597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.119614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.122876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.122919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.122934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.127126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.127180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.127197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.131663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.131707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.131723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.136227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.136270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.136286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.139919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.139963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.139978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.144394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.144439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.144455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.148331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.148373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.148388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.151696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.531 [2024-07-15 19:49:00.151746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.531 [2024-07-15 19:49:00.151762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.531 [2024-07-15 19:49:00.156009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 
[2024-07-15 19:49:00.156054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.156070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.160031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.160075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.160091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.163771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.163816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.163831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.168069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.168113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.168128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.170993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.171036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.171052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.175654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.175698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.175714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.179583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.179627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.179642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.183756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.183799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.183815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.187959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.188150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.188282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.192406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.192597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.192728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.197015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.197227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.197362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.202573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.202617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.202633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.205875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.205919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.205939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.210204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.210247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.210263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.214431] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.214475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.214490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.218285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.218329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.218345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.222311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.222363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.222380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.225982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.226025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.226041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.229756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.229799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.229814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.234030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.234074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.234090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.238491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.238534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.238549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:34.532 [2024-07-15 19:49:00.241828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.241875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.241891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.246232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.246272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.246288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.249598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.249639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.249654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.253136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.253190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.253212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.257598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.257640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.257655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.261899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.261942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.261969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.266114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.266175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.266191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.269585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.532 [2024-07-15 19:49:00.269628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.532 [2024-07-15 19:49:00.269643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.532 [2024-07-15 19:49:00.274256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.274297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.274313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.278219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.278261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.278277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.281500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.281541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.281556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.285851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.285900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.285917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.289809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.289851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.289866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.293353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.293395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.293410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.297928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.297987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.298003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.301879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.301926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.301941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.305583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.305626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.305641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.533 [2024-07-15 19:49:00.309971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.533 [2024-07-15 19:49:00.310014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.533 [2024-07-15 19:49:00.310029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.313857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.313898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.313913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.318371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.318415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.318430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.321832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.321873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.321889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.326253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.326295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.326310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.330266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.330317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.330332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.333935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.334002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.334018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.337825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.337869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.337884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.342474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.342518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.342534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.347171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.347211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.347226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.350307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.350349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 
[2024-07-15 19:49:00.350364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.354935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.354980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.354995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.358456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.358500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.358515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.362781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.362824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.362839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.793 [2024-07-15 19:49:00.366199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.793 [2024-07-15 19:49:00.366242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.793 [2024-07-15 19:49:00.366257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.370618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.370665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.370681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.374714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.374759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.374775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.379391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.379432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.379447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.382841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.382884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.382900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.387330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.387372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.387387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.390853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.390900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.390921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.394943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.394986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.395004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.398789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.398833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.398848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.403051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.403096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.403112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.406737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.406781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.406797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.411072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.411117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.411132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.414785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.414830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.414845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.418282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.418324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.418339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.421361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.421402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.421418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.425876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.425920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.425936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.429416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.429458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.429473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.433460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.433503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.433518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.438041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.438093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.438115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.441289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.441330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.441345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.445836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.445880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.445896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.450594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.450785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.450898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.454900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.454946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.454962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.458431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.458475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.458490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.463406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 
[2024-07-15 19:49:00.463448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.463463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.467786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.467829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.467845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.470955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.470996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.471011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.475757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.475812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.475828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.480771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.480813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.794 [2024-07-15 19:49:00.480829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.794 [2024-07-15 19:49:00.484099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.794 [2024-07-15 19:49:00.484141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.484174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.488418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.488461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.488476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.493366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.493407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.493423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.496801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.496843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.496858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.500718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.500760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.500776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.505625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.505668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.505683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.510340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.510390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.510406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.513446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.513488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.513502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.518403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.518447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.518463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.523015] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.523061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.523076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.526061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.526101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.526116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.530436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.530480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.530496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.534234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.534277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.534292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.538704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.538748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.538763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.542973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.543017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.543032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.547327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.547368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.547384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:34.795 [2024-07-15 19:49:00.550750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.550791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.550806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.555087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.555132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.555147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.558903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.558948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.558964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.563452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.563495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.563510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.568270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.568313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.568328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.795 [2024-07-15 19:49:00.571115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:34.795 [2024-07-15 19:49:00.571170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.795 [2024-07-15 19:49:00.571186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.576110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.576173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.576190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.579780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.579824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.579846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.584366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.584408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.584423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.589102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.589145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.589172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.592737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.592777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.592793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.596949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.596991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.597006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.602135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.602188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.602204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.607373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.607415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.607430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.610916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.610959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.610974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.614274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.614316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.614330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.618842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.618887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.618902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.622002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.622045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.622060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.626513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.626558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.626573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.629643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.629685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.629701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.633739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.633781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.633797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.637900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.637942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.637966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.641899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.641941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.641968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.646371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.646413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.646428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.649742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.649784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.056 [2024-07-15 19:49:00.649799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.056 [2024-07-15 19:49:00.653356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.056 [2024-07-15 19:49:00.653404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.653420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.657568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.657610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.657625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.661982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.662025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 
[2024-07-15 19:49:00.662040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.664991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.665032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.665047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.669412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.669453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.669469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.673374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.673415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.673430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.677072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.677115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.677130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.681050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.681092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.681108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.685677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.685720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.685735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.689422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.689464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.689479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.693005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.693048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.693063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.696725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.696768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.696783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.701392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.701435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.701451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.704986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.705029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.705044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.708650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.708692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.708708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.712752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.712795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.712811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.716274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.716315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.716330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.720559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.720603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.720618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.724523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.724566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.724581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.728187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.728227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.728242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.732318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.732360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.732376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.736086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.736131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.736146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.739552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.739595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.739611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.743271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.743314] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.743329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.747518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.747559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.747575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.751757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.751800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.751816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.755744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.755788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.755803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.759834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.759878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.759893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.763393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.763435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.057 [2024-07-15 19:49:00.763451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.057 [2024-07-15 19:49:00.768087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.057 [2024-07-15 19:49:00.768129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.768145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.771762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.771805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.771820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.775351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.775394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.775410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.778988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.779032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.779047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.782721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.782764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.782780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.786965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.787008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.787023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.790803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.790847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.790862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.794997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.795040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.795055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.799419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 
00:20:35.058 [2024-07-15 19:49:00.799583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.799602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.803755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.803943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.804089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.808147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.808204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.808219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.812335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.812378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.812393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.816679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.816722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.816737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.820766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.820808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.820824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.824454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.824499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.824515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.829120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.829184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.829201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.058 [2024-07-15 19:49:00.833673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.058 [2024-07-15 19:49:00.833717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.058 [2024-07-15 19:49:00.833733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.836803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.836845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.836860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.841217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.841259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.841274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.845379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.845422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.845438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.849009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.849052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.849067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.853274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.853316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.853332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.857918] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.857980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.857997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.860960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.861002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.861018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.865374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.865416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.865432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.869986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.870029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.870044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.873332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.873374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.873389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.877964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.878004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.878020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.881983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.882025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.882040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:35.318 [2024-07-15 19:49:00.885131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.885187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.885202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.890353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.890396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.890411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.895111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.895167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.895184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.318 [2024-07-15 19:49:00.898488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.318 [2024-07-15 19:49:00.898530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.318 [2024-07-15 19:49:00.898545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.903270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.903312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.903327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.907169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.907210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.907225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.910487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.910530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.910546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.915282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.915322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.915338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.920305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.920348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.920363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.923629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.923671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.923687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.927870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.927912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.927927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.932053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.932097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.932112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.935721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.935764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.935780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.319 [2024-07-15 19:49:00.939748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.319 [2024-07-15 19:49:00.939790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.319 [2024-07-15 19:49:00.939805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:35.319 [2024-07-15 19:49:00.943485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0)
00:20:35.319 [2024-07-15 19:49:00.943527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:35.319 [2024-07-15 19:49:00.943543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-message sequence (nvme_tcp.c:1459 data digest error on tqpair=(0x9230e0), nvme_qpair.c:243 READ command notice, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further READ commands between 19:49:00.948 and 19:49:01.524, console time 00:20:35.319-00:20:35.845 ...]
00:20:35.845 [2024-07-15 19:49:01.528370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0)
00:20:35.845 [2024-07-15 19:49:01.528414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:35.845 [2024-07-15 19:49:01.528430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.531891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.531933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.531949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.536215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.536258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.536273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.539506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.539549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.539571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.543975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.544020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.544036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.548592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.548634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.548650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.551775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.551817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.551833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.556279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.556468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.556582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.560210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.560252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.560275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.564057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.564101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.564116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.568190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.568231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.568247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.572143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.572197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.572213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.575909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.575955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.575971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.845 [2024-07-15 19:49:01.580401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.845 [2024-07-15 19:49:01.580444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.845 [2024-07-15 19:49:01.580459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.583714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.583758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.583773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.588213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.588254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.588270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.593285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.593328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.593343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.596657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.596700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.596715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.601355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.601394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.601425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.605351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.605391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.605422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.608871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.608912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.608958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.612905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.613109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 
[2024-07-15 19:49:01.613128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.617232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.617452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.617633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.846 [2024-07-15 19:49:01.621744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:35.846 [2024-07-15 19:49:01.621967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.846 [2024-07-15 19:49:01.622105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.106 [2024-07-15 19:49:01.626387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.106 [2024-07-15 19:49:01.626612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.106 [2024-07-15 19:49:01.626794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.106 [2024-07-15 19:49:01.631005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.106 [2024-07-15 19:49:01.631242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.631390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.635458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.635668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.635803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.639453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.639652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.639786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.644085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.644127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.644158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.648560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.648601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.648630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.653043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.653082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.653113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.656120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.656184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.656216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.660702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.660744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.660774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.665243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.665281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.665313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.669015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.669058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.669090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.673171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.673220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.673252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.677277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.677319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.677350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.681552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.681591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.681621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.686406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.686448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.686478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.689619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.689657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.689688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.694137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.694187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.694203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.698633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.698672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.698708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.702540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.702578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.702609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.706328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.706369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.706400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.710984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.711026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.711057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.714675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.714716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.714753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.718941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.718981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.719012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.723281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.723320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.723351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.726495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.726545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.726576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.730782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 
[2024-07-15 19:49:01.730823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.730854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.735008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.735050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.735081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.738431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.738472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.738502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.743115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.743182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.743198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.107 [2024-07-15 19:49:01.748017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.107 [2024-07-15 19:49:01.748059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.107 [2024-07-15 19:49:01.748089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.752433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.752473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.752488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.755410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.755450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.755482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.759645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.759685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.759715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.763414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.763454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.763485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.767855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.767896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.767926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.771167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.771228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.771243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.775481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.775523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.775554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.779766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.779808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.779839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.783327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.783363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.783393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.787596] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.787637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.787668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.792323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.792362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.792376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.796184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.796233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.796263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.799745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.799799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.799830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.804665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.804702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.804732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.809058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.809095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.809125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.813227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.813262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.813291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:36.108 [2024-07-15 19:49:01.816420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.816459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.816489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.820154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.820234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.820250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.824313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.824354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.824384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.828835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.828875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.828905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.833387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.833607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.833644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.836591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.836632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.836663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.841145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.841210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.841225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.845559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.845597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.845627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.848708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.848746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.848776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.852971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.853007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.853038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.856443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.856481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.856511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.860363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.108 [2024-07-15 19:49:01.860404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.108 [2024-07-15 19:49:01.860435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.108 [2024-07-15 19:49:01.864584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.109 [2024-07-15 19:49:01.864621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.109 [2024-07-15 19:49:01.864651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.109 [2024-07-15 19:49:01.867970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.109 [2024-07-15 19:49:01.868007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.109 [2024-07-15 19:49:01.868037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.109 [2024-07-15 19:49:01.871875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.109 [2024-07-15 19:49:01.871914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.109 [2024-07-15 19:49:01.871944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.109 [2024-07-15 19:49:01.875680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.109 [2024-07-15 19:49:01.875717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.109 [2024-07-15 19:49:01.875747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.109 [2024-07-15 19:49:01.878930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.109 [2024-07-15 19:49:01.878967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.109 [2024-07-15 19:49:01.878996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.109 [2024-07-15 19:49:01.883359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.109 [2024-07-15 19:49:01.883411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.109 [2024-07-15 19:49:01.883440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.368 [2024-07-15 19:49:01.887225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.368 [2024-07-15 19:49:01.887264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.368 [2024-07-15 19:49:01.887294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.368 [2024-07-15 19:49:01.891354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.368 [2024-07-15 19:49:01.891426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.368 [2024-07-15 19:49:01.891457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.368 [2024-07-15 19:49:01.894964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0) 00:20:36.368 [2024-07-15 19:49:01.895003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.368 [2024-07-15 19:49:01.895033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:36.368 [2024-07-15 19:49:01.898587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0)
00:20:36.368 [2024-07-15 19:49:01.898625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.368 [2024-07-15 19:49:01.898654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:36.368 [2024-07-15 19:49:01.902729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0)
00:20:36.368 [2024-07-15 19:49:01.902767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.368 [2024-07-15 19:49:01.902812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:36.368 [2024-07-15 19:49:01.906342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0)
00:20:36.368 [2024-07-15 19:49:01.906376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.368 [2024-07-15 19:49:01.906388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:20:36.368 [2024-07-15 19:49:01.910338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0)
00:20:36.368 [2024-07-15 19:49:01.910373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.368 [2024-07-15 19:49:01.910386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:36.368 [2024-07-15 19:49:01.914202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0)
00:20:36.368 [2024-07-15 19:49:01.914237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.368 [2024-07-15 19:49:01.914250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:36.368 [2024-07-15 19:49:01.918832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9230e0)
00:20:36.368 [2024-07-15 19:49:01.918881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.368 [2024-07-15 19:49:01.918894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:36.368
00:20:36.368 Latency(us)
00:20:36.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:36.368 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:36.368 nvme0n1 : 2.00 7592.11 949.01 0.00 0.00 2103.15 633.02 6225.92
00:20:36.368 ===================================================================================================================
00:20:36.368 Total : 7592.11 949.01 0.00 0.00 2103.15 633.02 6225.92
00:20:36.369 0
00:20:36.369 19:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:36.369 19:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:36.369 19:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:36.369 | .driver_specific
00:20:36.369 | .nvme_error
00:20:36.369 | .status_code
00:20:36.369 | .command_transient_transport_error'
00:20:36.369 19:49:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 490 > 0 ))
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93924
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93924 ']'
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93924
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93924
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:36.628 killing process with pid 93924
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93924'
00:20:36.628 Received shutdown signal, test time was about 2.000000 seconds
00:20:36.628
00:20:36.628 Latency(us)
00:20:36.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:36.628 ===================================================================================================================
00:20:36.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93924
00:20:36.628 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93924
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94020
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94020 /var/tmp/bperf.sock
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94020 ']'
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:36.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:36.887 19:49:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:36.887 [2024-07-15 19:49:02.537436] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization...
00:20:36.887 [2024-07-15 19:49:02.537513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94020 ]
00:20:37.146 [2024-07-15 19:49:02.670913] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:37.146 [2024-07-15 19:49:02.776034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:38.080 19:49:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:38.338 nvme0n1
00:20:38.338 19:49:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:20:38.338 19:49:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:38.338 19:49:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10
-- # set +x 00:20:38.596 19:49:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.596 19:49:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:38.596 19:49:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:38.596 Running I/O for 2 seconds... 00:20:38.596 [2024-07-15 19:49:04.256477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f6458 00:20:38.596 [2024-07-15 19:49:04.257235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.596 [2024-07-15 19:49:04.257265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:38.596 [2024-07-15 19:49:04.270497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f96f8 00:20:38.596 [2024-07-15 19:49:04.271416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.596 [2024-07-15 19:49:04.271451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:38.596 [2024-07-15 19:49:04.281550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fc998 00:20:38.596 [2024-07-15 19:49:04.283335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.596 [2024-07-15 19:49:04.283371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:38.596 [2024-07-15 19:49:04.294868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ee5c8 00:20:38.596 [2024-07-15 19:49:04.296299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.596 [2024-07-15 19:49:04.296335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:38.596 [2024-07-15 19:49:04.304467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e7c50 00:20:38.596 [2024-07-15 19:49:04.305218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.597 [2024-07-15 19:49:04.305253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:38.597 [2024-07-15 19:49:04.318772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190dfdc0 00:20:38.597 [2024-07-15 19:49:04.320353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.597 [2024-07-15 19:49:04.320385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:38.597 [2024-07-15 
19:49:04.329329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fc998 00:20:38.597 [2024-07-15 19:49:04.331111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.597 [2024-07-15 19:49:04.331147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:38.597 [2024-07-15 19:49:04.342627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f0350 00:20:38.597 [2024-07-15 19:49:04.344039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.597 [2024-07-15 19:49:04.344074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:38.597 [2024-07-15 19:49:04.353970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f31b8 00:20:38.597 [2024-07-15 19:49:04.355402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.597 [2024-07-15 19:49:04.355434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:38.597 [2024-07-15 19:49:04.366477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fac10 00:20:38.597 [2024-07-15 19:49:04.368029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.597 [2024-07-15 19:49:04.368063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:38.597 [2024-07-15 19:49:04.377561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f2948 00:20:38.855 [2024-07-15 19:49:04.378750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.855 [2024-07-15 19:49:04.378786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:38.855 [2024-07-15 19:49:04.389413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f7100 00:20:38.855 [2024-07-15 19:49:04.390660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.855 [2024-07-15 19:49:04.390696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:38.855 [2024-07-15 19:49:04.401114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f31b8 00:20:38.856 [2024-07-15 19:49:04.402402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.402436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
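For reference, the digest.sh xtrace above (host/digest.sh@56-69) boils down to a short bperf-driven flow: start bdevperf idle, enable per-error-code NVMe statistics with unlimited retries, inject crc32c corruption into the accel framework, attach the NVMe/TCP controller with data digest enabled, run the workload, and read the transient-transport-error counter back out of bdev_get_iostat. A minimal bash sketch of that flow follows, using only the paths, socket, and target address visible in the log; it is a simplified stand-in for the real digest.sh helpers, not a verbatim copy of them.

#!/usr/bin/env bash
# Sketch of the digest-error randwrite pass; paths and addresses are taken
# from the log above and may differ in other environments.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf idle (-z): core mask 0x2, 4096-byte random writes, QD 128, 2 s run.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
until [ -S "$SOCK" ]; do sleep 0.1; done   # simplified stand-in for waitforlisten

# Count completions per NVMe error code and retry transient errors indefinitely.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Corrupt every 256th crc32c operation. The log issues this through rpc_cmd,
# which targets a separate RPC socket in this test, so it is only shown commented out here:
#   rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Attach the NVMe/TCP controller with data digest (--ddgst) enabled, so the injected
# CRC corruption surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the workload, then read the counter that digest.sh@71 asserts is non-zero.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Because --bdev-retry-count -1 retries every transient failure until it succeeds, the workload itself completes cleanly, which is presumably why the summary table earlier in the log reports Fail/s 0.00 even though the error-stat counter recorded 490 transient transport errors for that pass.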
00:20:38.856 [2024-07-15 19:49:04.415426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e3060 00:20:38.856 [2024-07-15 19:49:04.417353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.417385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.423894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e5ec8 00:20:38.856 [2024-07-15 19:49:04.424867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.424901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.438359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e1710 00:20:38.856 [2024-07-15 19:49:04.439829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.439863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.449421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe720 00:20:38.856 [2024-07-15 19:49:04.450742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.450777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.459398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e5ec8 00:20:38.856 [2024-07-15 19:49:04.460228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.460261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.473777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ecc78 00:20:38.856 [2024-07-15 19:49:04.475293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.475326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.485787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:38.856 [2024-07-15 19:49:04.486797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.486836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.497760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e88f8 00:20:38.856 [2024-07-15 19:49:04.499089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.499123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.511089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f46d0 00:20:38.856 [2024-07-15 19:49:04.512890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.512923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.522431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190eaef0 00:20:38.856 [2024-07-15 19:49:04.524069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.524103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.534103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190eb760 00:20:38.856 [2024-07-15 19:49:04.535733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.535766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.545216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe2e8 00:20:38.856 [2024-07-15 19:49:04.546496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.546532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.556821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e2c28 00:20:38.856 [2024-07-15 19:49:04.558192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.558229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.567931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190dece0 00:20:38.856 [2024-07-15 19:49:04.568863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.568897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.582949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190dece0 00:20:38.856 [2024-07-15 19:49:04.584946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.584979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.591482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e4de8 00:20:38.856 [2024-07-15 19:49:04.592358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.592393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.606504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f6cc8 00:20:38.856 [2024-07-15 19:49:04.608358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.608392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.618078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ecc78 00:20:38.856 [2024-07-15 19:49:04.619935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.619968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:38.856 [2024-07-15 19:49:04.626555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f4f40 00:20:38.856 [2024-07-15 19:49:04.627453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.856 [2024-07-15 19:49:04.627486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:39.115 [2024-07-15 19:49:04.638675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190de8a8 00:20:39.115 [2024-07-15 19:49:04.639570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.115 [2024-07-15 19:49:04.639606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.115 [2024-07-15 19:49:04.652187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f9b30 00:20:39.115 [2024-07-15 19:49:04.653555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.115 [2024-07-15 19:49:04.653589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:39.115 [2024-07-15 19:49:04.664114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190de8a8 00:20:39.115 [2024-07-15 19:49:04.665019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.115 [2024-07-15 19:49:04.665055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:39.115 [2024-07-15 19:49:04.675728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fd208 00:20:39.115 [2024-07-15 19:49:04.676920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.676954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.687429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e0a68 00:20:39.116 [2024-07-15 19:49:04.688519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.688553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.698748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f92c0 00:20:39.116 [2024-07-15 19:49:04.699671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.699706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.712978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f0ff8 00:20:39.116 [2024-07-15 19:49:04.714756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.714790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.725119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ec840 00:20:39.116 [2024-07-15 19:49:04.726878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.726912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.736495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f2948 00:20:39.116 [2024-07-15 19:49:04.738109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.738143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.747277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ff3c8 00:20:39.116 [2024-07-15 19:49:04.748474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.748508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.758949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f46d0 00:20:39.116 [2024-07-15 19:49:04.760219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.760252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.773244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190edd58 00:20:39.116 [2024-07-15 19:49:04.775179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.775214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.781710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e0630 00:20:39.116 [2024-07-15 19:49:04.782543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.782576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.796724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ff3c8 00:20:39.116 [2024-07-15 19:49:04.798537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.798569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.804918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f96f8 00:20:39.116 [2024-07-15 19:49:04.805709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.805741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.817185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f7da8 00:20:39.116 [2024-07-15 19:49:04.817993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.818028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.831841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190df988 00:20:39.116 [2024-07-15 19:49:04.833294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.833330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.843787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f7538 00:20:39.116 [2024-07-15 19:49:04.846077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.846110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.855056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe2e8 00:20:39.116 [2024-07-15 19:49:04.856853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.856888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.865517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e27f0 00:20:39.116 [2024-07-15 19:49:04.866344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.866379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.878039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e4140 00:20:39.116 [2024-07-15 19:49:04.879015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.879049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:39.116 [2024-07-15 19:49:04.890562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ed920 00:20:39.116 [2024-07-15 19:49:04.891683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.116 [2024-07-15 19:49:04.891716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.904851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e12d8 00:20:39.375 [2024-07-15 19:49:04.906659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.906692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.913329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fda78 00:20:39.375 [2024-07-15 19:49:04.914155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.914197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.927658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fc560 00:20:39.375 [2024-07-15 19:49:04.928984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.929021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.938965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f6890 00:20:39.375 [2024-07-15 19:49:04.940127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.940176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.950350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ed0b0 00:20:39.375 [2024-07-15 19:49:04.951379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.951414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.962253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f5378 00:20:39.375 [2024-07-15 19:49:04.962907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.962942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.975920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190de038 00:20:39.375 [2024-07-15 19:49:04.977410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.977445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.987287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f7538 00:20:39.375 [2024-07-15 19:49:04.988633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.988668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:04.998289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fa7d8 00:20:39.375 [2024-07-15 19:49:04.999423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:04.999458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:05.009912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190eb760 00:20:39.375 [2024-07-15 19:49:05.011097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:05.011131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:05.024187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fc128 00:20:39.375 [2024-07-15 19:49:05.026022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:05.026055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:05.032600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e95a0 00:20:39.375 [2024-07-15 19:49:05.033510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:05.033542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:05.045084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e9168 00:20:39.375 [2024-07-15 19:49:05.046148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:05.046198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:05.057039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fdeb0 00:20:39.375 [2024-07-15 19:49:05.057623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 19:49:05.057649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:39.375 [2024-07-15 19:49:05.069662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190de470 00:20:39.375 [2024-07-15 19:49:05.070421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.375 [2024-07-15 
19:49:05.070459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:39.376 [2024-07-15 19:49:05.081111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e6300 00:20:39.376 [2024-07-15 19:49:05.081741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.376 [2024-07-15 19:49:05.081775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:39.376 [2024-07-15 19:49:05.095350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f9b30 00:20:39.376 [2024-07-15 19:49:05.097058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.376 [2024-07-15 19:49:05.097091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:39.376 [2024-07-15 19:49:05.108073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190de038 00:20:39.376 [2024-07-15 19:49:05.109949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.376 [2024-07-15 19:49:05.109992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:39.376 [2024-07-15 19:49:05.116567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f8e88 00:20:39.376 [2024-07-15 19:49:05.117477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.376 [2024-07-15 19:49:05.117511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:39.376 [2024-07-15 19:49:05.128737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ed4e8 00:20:39.376 [2024-07-15 19:49:05.129634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.376 [2024-07-15 19:49:05.129668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:39.376 [2024-07-15 19:49:05.140152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190eaab8 00:20:39.376 [2024-07-15 19:49:05.140918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.376 [2024-07-15 19:49:05.140951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:39.376 [2024-07-15 19:49:05.154174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f8a50 00:20:39.376 [2024-07-15 19:49:05.155533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.376 
[2024-07-15 19:49:05.155567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.165191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f31b8 00:20:39.634 [2024-07-15 19:49:05.166246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.166281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.176798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e84c0 00:20:39.634 [2024-07-15 19:49:05.177878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.177911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.188167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fda78 00:20:39.634 [2024-07-15 19:49:05.188783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.188818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.200072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e2c28 00:20:39.634 [2024-07-15 19:49:05.201009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.201049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.211045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ee190 00:20:39.634 [2024-07-15 19:49:05.212048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.212097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.224474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:39.634 [2024-07-15 19:49:05.226075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.226108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.235011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe2e8 00:20:39.634 [2024-07-15 19:49:05.236329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:39.634 [2024-07-15 19:49:05.236355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.246182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e9e10 00:20:39.634 [2024-07-15 19:49:05.247483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.247516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.257826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f7da8 00:20:39.634 [2024-07-15 19:49:05.259217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.259254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.267387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ddc00 00:20:39.634 [2024-07-15 19:49:05.268187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.268228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.281596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fa3a0 00:20:39.634 [2024-07-15 19:49:05.283066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.283099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.292556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe720 00:20:39.634 [2024-07-15 19:49:05.293601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.293647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.304543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190eaab8 00:20:39.634 [2024-07-15 19:49:05.305860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.305894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.317096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e5658 00:20:39.634 [2024-07-15 19:49:05.318888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24899 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.318934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.326295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e3d08 00:20:39.634 [2024-07-15 19:49:05.327443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.327475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.337864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f20d8 00:20:39.634 [2024-07-15 19:49:05.339019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.339052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.348755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f4b08 00:20:39.634 [2024-07-15 19:49:05.349430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.349463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.360627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fdeb0 00:20:39.634 [2024-07-15 19:49:05.361612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.361644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.371383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190edd58 00:20:39.634 [2024-07-15 19:49:05.372416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.634 [2024-07-15 19:49:05.372462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:39.634 [2024-07-15 19:49:05.382561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190dece0 00:20:39.635 [2024-07-15 19:49:05.383832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.635 [2024-07-15 19:49:05.383864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.635 [2024-07-15 19:49:05.394765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:39.635 [2024-07-15 19:49:05.395911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2489 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.635 [2024-07-15 19:49:05.395944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:39.635 [2024-07-15 19:49:05.406592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e9e10 00:20:39.635 [2024-07-15 19:49:05.407751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.635 [2024-07-15 19:49:05.407787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.419482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190df988 00:20:39.893 [2024-07-15 19:49:05.420821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.420858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.431837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e8088 00:20:39.893 [2024-07-15 19:49:05.433348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.433382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.442625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190de038 00:20:39.893 [2024-07-15 19:49:05.443668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.443734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.454653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fdeb0 00:20:39.893 [2024-07-15 19:49:05.455959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.455996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.465176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ee190 00:20:39.893 [2024-07-15 19:49:05.466393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.466429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.478358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e8d30 00:20:39.893 [2024-07-15 19:49:05.480346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:22539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.480385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.486837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190dfdc0 00:20:39.893 [2024-07-15 19:49:05.487692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.487731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.500608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f92c0 00:20:39.893 [2024-07-15 19:49:05.502407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.502444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.508145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe720 00:20:39.893 [2024-07-15 19:49:05.508982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.509017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.522517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ea680 00:20:39.893 [2024-07-15 19:49:05.523888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.523921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.534336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fa3a0 00:20:39.893 [2024-07-15 19:49:05.536492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.536528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.543997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ed0b0 00:20:39.893 [2024-07-15 19:49:05.545191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.545227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.555645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f4f40 00:20:39.893 [2024-07-15 19:49:05.556860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.556894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.567814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e0a68 00:20:39.893 [2024-07-15 19:49:05.568489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.568526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.581330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190edd58 00:20:39.893 [2024-07-15 19:49:05.582814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.582853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.591931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e95a0 00:20:39.893 [2024-07-15 19:49:05.593210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.593270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.602679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e3060 00:20:39.893 [2024-07-15 19:49:05.604024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.604060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.614799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e01f8 00:20:39.893 [2024-07-15 19:49:05.616152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.616199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.625526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e38d0 00:20:39.893 [2024-07-15 19:49:05.626882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.626916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.636714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e12d8 00:20:39.893 [2024-07-15 19:49:05.638061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.638100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.648005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fb480 00:20:39.893 [2024-07-15 19:49:05.649230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.649267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.658701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f0788 00:20:39.893 [2024-07-15 19:49:05.659900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.659933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:39.893 [2024-07-15 19:49:05.669994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190efae0 00:20:39.893 [2024-07-15 19:49:05.671205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.893 [2024-07-15 19:49:05.671240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.681288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe720 00:20:40.152 [2024-07-15 19:49:05.682356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.682394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.691860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fda78 00:20:40.152 [2024-07-15 19:49:05.692915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.692963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.703709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f6020 00:20:40.152 [2024-07-15 19:49:05.705006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.705040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.716931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe720 00:20:40.152 [2024-07-15 19:49:05.718812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.718849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.725353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f92c0 00:20:40.152 [2024-07-15 19:49:05.726271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.726307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.739145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e3498 00:20:40.152 [2024-07-15 19:49:05.740727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.740761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.749386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190de8a8 00:20:40.152 [2024-07-15 19:49:05.751167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.751205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.762669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ebb98 00:20:40.152 [2024-07-15 19:49:05.764099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.764133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.771508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fc998 00:20:40.152 [2024-07-15 19:49:05.772319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.772355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.784960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e2c28 00:20:40.152 [2024-07-15 19:49:05.786443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.786476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.796099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ebfd0 00:20:40.152 
[2024-07-15 19:49:05.797140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.797187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.807693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ee5c8 00:20:40.152 [2024-07-15 19:49:05.808862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.808896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.820067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fac10 00:20:40.152 [2024-07-15 19:49:05.821254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.821292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.831441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e5a90 00:20:40.152 [2024-07-15 19:49:05.832396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.832433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.844832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f7538 00:20:40.152 [2024-07-15 19:49:05.846478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.846511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.854548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e1b48 00:20:40.152 [2024-07-15 19:49:05.856305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.856337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.864171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e4de8 00:20:40.152 [2024-07-15 19:49:05.864971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.865008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.878436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe720 
00:20:40.152 [2024-07-15 19:49:05.879895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.879929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.889174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f1ca0 00:20:40.152 [2024-07-15 19:49:05.890165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.890231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.900583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fdeb0 00:20:40.152 [2024-07-15 19:49:05.901451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.901487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.913754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e5658 00:20:40.152 [2024-07-15 19:49:05.915772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.152 [2024-07-15 19:49:05.915803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:40.152 [2024-07-15 19:49:05.921946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f1868 00:20:40.153 [2024-07-15 19:49:05.922945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.153 [2024-07-15 19:49:05.922981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:05.935531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ecc78 00:20:40.411 [2024-07-15 19:49:05.937109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:05.937143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:05.947345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e88f8 00:20:40.411 [2024-07-15 19:49:05.949126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:05.949176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:05.955730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with 
pdu=0x2000190eb328 00:20:40.411 [2024-07-15 19:49:05.956584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:05.956621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:05.969343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e1710 00:20:40.411 [2024-07-15 19:49:05.970691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:05.970731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:05.980203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ebfd0 00:20:40.411 [2024-07-15 19:49:05.981372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:05.981408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:05.992893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e4578 00:20:40.411 [2024-07-15 19:49:05.994255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:05.994291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.005795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f0ff8 00:20:40.411 [2024-07-15 19:49:06.007930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:06.007962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.014417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e5658 00:20:40.411 [2024-07-15 19:49:06.015263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:06.015301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.027893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f4b08 00:20:40.411 [2024-07-15 19:49:06.028984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:06.029019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.039750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf77b90) with pdu=0x2000190ed4e8 00:20:40.411 [2024-07-15 19:49:06.041315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:06.041350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.052429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190dfdc0 00:20:40.411 [2024-07-15 19:49:06.054111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:06.054151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.064697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e2c28 00:20:40.411 [2024-07-15 19:49:06.066407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:06.066443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.075958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190f81e0 00:20:40.411 [2024-07-15 19:49:06.077502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:06.077539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.087440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fe2e8 00:20:40.411 [2024-07-15 19:49:06.088800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.411 [2024-07-15 19:49:06.088839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:40.411 [2024-07-15 19:49:06.100185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e3060 00:20:40.412 [2024-07-15 19:49:06.101745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.412 [2024-07-15 19:49:06.101781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:40.412 [2024-07-15 19:49:06.111532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190eee38 00:20:40.412 [2024-07-15 19:49:06.112882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.412 [2024-07-15 19:49:06.112920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:40.412 [2024-07-15 19:49:06.122917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf77b90) with pdu=0x2000190fc128 00:20:40.412 [2024-07-15 19:49:06.124132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.412 [2024-07-15 19:49:06.124187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:40.412 [2024-07-15 19:49:06.134510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fcdd0 00:20:40.412 [2024-07-15 19:49:06.135905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.412 [2024-07-15 19:49:06.135938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:40.412 [2024-07-15 19:49:06.148847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e1b48 00:20:40.412 [2024-07-15 19:49:06.150866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.412 [2024-07-15 19:49:06.150904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.412 [2024-07-15 19:49:06.157148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e1710 00:20:40.412 [2024-07-15 19:49:06.158231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.412 [2024-07-15 19:49:06.158268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.412 [2024-07-15 19:49:06.170771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190e4140 00:20:40.412 [2024-07-15 19:49:06.172489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.412 [2024-07-15 19:49:06.172528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:40.412 [2024-07-15 19:49:06.179188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ff3c8 00:20:40.412 [2024-07-15 19:49:06.179914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.412 [2024-07-15 19:49:06.179950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:40.412 [2024-07-15 19:49:06.193065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fdeb0 00:20:40.670 [2024-07-15 19:49:06.194534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.670 [2024-07-15 19:49:06.194570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:40.670 [2024-07-15 19:49:06.203690] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ebb98 00:20:40.670 [2024-07-15 19:49:06.204741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.670 [2024-07-15 19:49:06.204779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:40.670 [2024-07-15 19:49:06.216180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190df118 00:20:40.670 [2024-07-15 19:49:06.217307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.670 [2024-07-15 19:49:06.217345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:40.670 [2024-07-15 19:49:06.228259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190ebfd0 00:20:40.670 [2024-07-15 19:49:06.229701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.670 [2024-07-15 19:49:06.229735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:40.670 [2024-07-15 19:49:06.239062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fc560 00:20:40.670 [2024-07-15 19:49:06.240186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.670 [2024-07-15 19:49:06.240222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:40.670 00:20:40.670 Latency(us) 00:20:40.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.670 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:40.670 nvme0n1 : 2.00 21720.18 84.84 0.00 0.00 5886.78 2383.13 16205.27 00:20:40.670 =================================================================================================================== 00:20:40.670 Total : 21720.18 84.84 0.00 0.00 5886.78 2383.13 16205.27 00:20:40.670 0 00:20:40.670 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:40.670 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:40.670 | .driver_specific 00:20:40.670 | .nvme_error 00:20:40.670 | .status_code 00:20:40.670 | .command_transient_transport_error' 00:20:40.670 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:40.670 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94020 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94020 ']' 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 
-- # kill -0 94020 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94020 00:20:40.927 killing process with pid 94020 00:20:40.927 Received shutdown signal, test time was about 2.000000 seconds 00:20:40.927 00:20:40.927 Latency(us) 00:20:40.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.927 =================================================================================================================== 00:20:40.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94020' 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94020 00:20:40.927 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94020 00:20:41.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94106 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94106 /var/tmp/bperf.sock 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 94106 ']' 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.185 19:49:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.185 [2024-07-15 19:49:06.860198] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:20:41.185 [2024-07-15 19:49:06.860488] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94106 ] 00:20:41.185 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:41.185 Zero copy mechanism will not be used. 00:20:41.442 [2024-07-15 19:49:06.995943] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.442 [2024-07-15 19:49:07.097709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.374 19:49:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.374 19:49:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:42.374 19:49:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:42.374 19:49:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:42.374 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:42.374 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.374 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.374 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.374 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:42.374 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:42.631 nvme0n1 00:20:42.631 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:42.631 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.631 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.631 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.631 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:42.631 19:49:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:42.889 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:42.889 Zero copy mechanism will not be used. 00:20:42.889 Running I/O for 2 seconds... 
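The xtrace interleaved above (host/digest.sh, run_bperf_err randwrite 131072 16) is spread across the bdevperf startup output, so the following is a condensed sketch of the sequence it executes, assembled only from commands that appear verbatim in this trace. Two details are not expanded in this excerpt and are flagged as assumptions in the comments: bdevperf is presumably backgrounded by the harness (its pid is recorded as bperfpid=94106 and waited on via waitforlisten), and the socket behind the rpc_cmd wrapper is not shown here (it appears to address the nvmf target application, while bperf_rpc, whose expansion is shown, addresses bdevperf at /var/tmp/bperf.sock).

    # bdevperf started in wait-for-RPC mode (-z) on its own RPC socket with the
    # parameters of this run (randwrite, 131072-byte I/O, qd 16, 2 seconds);
    # assumption: the harness backgrounds this and records the pid as bperfpid
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

    # enable per-controller NVMe error counters and unlimited bdev retries
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # crc32c error injection starts out disabled; rpc_cmd is the harness wrapper
    # whose target socket is not expanded in this excerpt (assumed: nvmf target app)
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # attach the controller with data digest enabled (--ddgst)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt every 32nd crc32c operation so data digest checks fail, which is what
    # produces the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions in this log
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # run the workload, then read the transient-error counter back from iostat JSON
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

After each run the harness asserts that the extracted counter is non-zero ((( count > 0 )) in host/digest.sh@71; the earlier 4096-byte, qd=128 run above reported 170) and then kills the bdevperf process, as seen in the killprocess trace for pid 94020.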
00:20:42.889 [2024-07-15 19:49:08.506628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.507438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.507789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.513412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.513937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.514065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.519632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.520034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.520146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.525706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.526129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.526259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.531894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.532313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.532443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.537915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.538334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.538462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.544214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.544609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.544736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.550262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.550640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.550746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.556249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.556637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.556745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.562238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.562616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.562741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.568175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.568587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.568709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.574216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.574631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.574740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.580220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.580615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.580723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.586201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.586593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.586719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.592219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.592622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.592748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.598141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.598590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.598728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.604303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.604702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.604833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.610419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.610812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.610921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.616348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.616734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.616841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.622468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.622887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.622994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.628443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.628850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.628957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.634479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.634895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.635022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.640512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.640932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.641057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.646628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.647031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.647141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.652675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.653079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.653229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.658752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.659194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.659310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.889 [2024-07-15 19:49:08.664820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:42.889 [2024-07-15 19:49:08.665159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.889 [2024-07-15 19:49:08.665199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.148 [2024-07-15 19:49:08.670628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.148 [2024-07-15 19:49:08.670966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.148 
[2024-07-15 19:49:08.670994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.148 [2024-07-15 19:49:08.676326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.148 [2024-07-15 19:49:08.676638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.148 [2024-07-15 19:49:08.676666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.682047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.682385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.682413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.687819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.688133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.688169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.693644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.693953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.694009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.699310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.699620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.699646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.704927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.705282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.705310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.710566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.710873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.710900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.716241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.716548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.716575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.721857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.722220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.722249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.727610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.727957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.727985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.733423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.733754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.733781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.739210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.739523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.739550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.744959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.745295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.745322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.750639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.750955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.750984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.756402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.756717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.756745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.762085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.762434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.762462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.767752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.768064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.768091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.773420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.773741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.773767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.779077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.779396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.779422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.784660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.784966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.784992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.790231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.790536] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.790563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.795800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.796104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.796131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.801389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.801700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.801727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.807099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.807428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.807455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.812651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.812954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.812981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.818369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.818660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.818687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.823994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.824316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.824343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.829694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.830012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.830040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.835461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.835760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.149 [2024-07-15 19:49:08.835787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.149 [2024-07-15 19:49:08.841104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.149 [2024-07-15 19:49:08.841475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.841503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.846869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.847177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.847214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.852504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.852813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.852842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.858212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.858551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.858579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.863791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.864120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.864148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.869554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 
[2024-07-15 19:49:08.869861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.869888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.875290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.875594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.875620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.880781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.881086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.881114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.886588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.886887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.886914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.892251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.892587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.892614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.898022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.898383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.898410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.903700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.904025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.904052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.909410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) 
with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.909732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.909759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.915311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.915638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.915665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.920949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.921284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.921311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.150 [2024-07-15 19:49:08.926797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.150 [2024-07-15 19:49:08.927102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.150 [2024-07-15 19:49:08.927130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.932609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.932942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.932970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.938527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.938816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.938844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.944860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.945205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.945243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.950575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.950881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.950909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.956402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.956719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.956748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.962259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.962580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.962607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.968184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.968532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.968559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.973844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.974204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.974233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.979967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.980285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.980314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.985631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.985971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.985999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.991454] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.991761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.991803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:08.997295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:08.997631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:08.997658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:09.003062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:09.003406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:09.003434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:09.008803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:09.009115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:09.009143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:09.014551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:09.014851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:09.014879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:09.020421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:09.020751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:09.020778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:09.026247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:09.026579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:09.026608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:20:43.410 [2024-07-15 19:49:09.031908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:09.032232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:09.032260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:09.037708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.410 [2024-07-15 19:49:09.038041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.410 [2024-07-15 19:49:09.038069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.410 [2024-07-15 19:49:09.043344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.043643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.043670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.049056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.049390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.049418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.054685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.055000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.055027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.060369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.060707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.060734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.066164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.066511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.066538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.071898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.072241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.072268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.077639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.077996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.078024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.083420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.083731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.083759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.089220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.089544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.089586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.094821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.095134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.095169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.100458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.100772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.100800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.106165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.106479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.106506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.111733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.112044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.112073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.117362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.117681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.117708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.123030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.123357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.123384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.128749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.129069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.129097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.134508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.134823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.134850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.140096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.140419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.140447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.145744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.146085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.146113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.151451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.151771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.151799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.157238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.157544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.157572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.162968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.163301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.163329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.168704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.169029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.169057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.174542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.174855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.174882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.180147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.180473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 [2024-07-15 19:49:09.180500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.411 [2024-07-15 19:49:09.185757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.411 [2024-07-15 19:49:09.186102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.411 
[2024-07-15 19:49:09.186130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.191458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.191767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.191795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.197025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.197373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.197402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.202738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.203053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.203080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.208324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.208623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.208650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.213853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.214215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.214243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.219577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.219900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.219929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.225317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.225653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.225680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.230973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.231291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.231317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.236605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.236896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.236923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.242154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.242476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.242501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.247729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.248032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.248058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.253439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.253769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.253796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.259165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.259483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.259510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.264865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.265192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.265231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.270681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.271013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.271040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.276417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.276733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.276760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.282115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.282449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.282476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.287758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.288072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.288099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.293402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.293726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.293753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.299017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.299335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.299362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.304708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.304996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.305022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.310250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.671 [2024-07-15 19:49:09.310541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.671 [2024-07-15 19:49:09.310567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.671 [2024-07-15 19:49:09.315779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.316083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.316110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.321409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.321746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.321772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.327125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.327445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.327471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.332642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.332945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.332971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.338260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.338567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.338593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.343823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 
[2024-07-15 19:49:09.344126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.344153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.349395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.349720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.349747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.355056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.355374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.355400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.360572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.360877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.360904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.366249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.366574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.366601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.371724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.372046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.372074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.377420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.377746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.377773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.383097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) 
with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.383410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.383438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.388928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.389272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.389300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.394715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.395026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.395053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.400472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.400787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.400814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.406436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.406740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.406769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.412168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.412481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.412508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.417912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.418279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.418323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.423867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.424181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.424234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.429848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.430207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.430236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.435692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.436007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.436034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.441492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.441822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.441850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.672 [2024-07-15 19:49:09.447332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.672 [2024-07-15 19:49:09.447673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.672 [2024-07-15 19:49:09.447701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.453043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.453390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.453418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.458805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.459129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.459168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.464678] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.465000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.465029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.470466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.470778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.470806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.476336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.476629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.476656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.482023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.482344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.482372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.487800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.488092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.488119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.493623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.493914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.493942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.499397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.499713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.499757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:43.932 [2024-07-15 19:49:09.505298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.505616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.505642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.511131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.511453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.511476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.516998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.517340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.517368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.522740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.523054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.523081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.528610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.528920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.528947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.932 [2024-07-15 19:49:09.534349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.932 [2024-07-15 19:49:09.534643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.932 [2024-07-15 19:49:09.534671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.540152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.540491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.540520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.545793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.546138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.546174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.551519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.551826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.551854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.557340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.557674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.557701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.563010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.563357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.563385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.568729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.569042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.569070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.574492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.574806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.574834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.580133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.580471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.580499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.585796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.586143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.586179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.591542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.591834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.591863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.597301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.597592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.597620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.602958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.603294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.603321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.608694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.609014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.609043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.614530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.614845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.614873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.620209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.620523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.620550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.625874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.626221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.626249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.631747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.632040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.632067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.637468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.637775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.637803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.643214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.643521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.643548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.648891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.649217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.649244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.654570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.654869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.654897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.660360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.660677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 
[2024-07-15 19:49:09.660704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.666135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.666466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.666493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.671889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.672206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.672245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.677548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.677864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.933 [2024-07-15 19:49:09.677892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.933 [2024-07-15 19:49:09.683319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.933 [2024-07-15 19:49:09.683658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.934 [2024-07-15 19:49:09.683685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.934 [2024-07-15 19:49:09.689080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.934 [2024-07-15 19:49:09.689400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.934 [2024-07-15 19:49:09.689429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.934 [2024-07-15 19:49:09.694733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.934 [2024-07-15 19:49:09.695056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.934 [2024-07-15 19:49:09.695085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.934 [2024-07-15 19:49:09.700560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.934 [2024-07-15 19:49:09.700885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:43.934 [2024-07-15 19:49:09.700913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.934 [2024-07-15 19:49:09.706367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.934 [2024-07-15 19:49:09.706675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.934 [2024-07-15 19:49:09.706703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.934 [2024-07-15 19:49:09.712039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:43.934 [2024-07-15 19:49:09.712343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.934 [2024-07-15 19:49:09.712372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.193 [2024-07-15 19:49:09.717776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.193 [2024-07-15 19:49:09.718101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.193 [2024-07-15 19:49:09.718129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.193 [2024-07-15 19:49:09.723616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.193 [2024-07-15 19:49:09.723937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.193 [2024-07-15 19:49:09.723965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.193 [2024-07-15 19:49:09.729425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.193 [2024-07-15 19:49:09.729716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.193 [2024-07-15 19:49:09.729744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.193 [2024-07-15 19:49:09.735143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.193 [2024-07-15 19:49:09.735465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.193 [2024-07-15 19:49:09.735494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.193 [2024-07-15 19:49:09.740897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.193 [2024-07-15 19:49:09.741231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.193 [2024-07-15 19:49:09.741259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.193 [2024-07-15 19:49:09.746590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.193 [2024-07-15 19:49:09.746882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.193 [2024-07-15 19:49:09.746909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.752287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.752593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.752621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.757912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.758244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.758272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.763598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.763890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.763918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.769311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.769619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.769647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.775033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.775368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.775396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.780783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.781072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.781100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.786495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.786821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.786849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.792206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.792512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.792539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.797909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.798240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.798268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.803661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.803965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.803993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.809353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.809661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.809688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.815029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.815347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.815374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.820729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 
[2024-07-15 19:49:09.821019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.821048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.826462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.826771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.826799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.832179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.832493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.832520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.837882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.838210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.838238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.843623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.843946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.843975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.849367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.849688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.849716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.855074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.855410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.855438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.860761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) 
with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.861086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.861114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.866482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.866839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.866867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.872365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.872683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.872710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.878153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.878467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.878495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.883841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.884147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.884215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.889675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.890003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.890031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.895578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.895893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.895920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.901302] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.901618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.901645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.907003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.907340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.194 [2024-07-15 19:49:09.907367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.194 [2024-07-15 19:49:09.912806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.194 [2024-07-15 19:49:09.913120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.913148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.918596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.918938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.918965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.924248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.924549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.924591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.929893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.930243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.930270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.935554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.935847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.935875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.941199] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.941499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.941526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.946825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.947129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.947167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.952512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.952824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.952851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.958165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.958486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.958512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.963776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.964074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.964102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.195 [2024-07-15 19:49:09.969405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.195 [2024-07-15 19:49:09.969745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.195 [2024-07-15 19:49:09.969771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:09.975124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:09.975453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:09.975480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
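The long run of repeated tcp.c data_crc32_calc_done errors above is the NVMe/TCP data-digest check firing: SPDK's TCP transport recomputes a CRC32C digest over each received data PDU payload, compares it with the digest carried in the PDU, and each mismatch here is paired with a completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) for the corresponding WRITE, as the adjacent nvme_qpair lines show. As a rough standalone sketch only (not SPDK's implementation; the payload size and the flipped byte are made up for illustration), the following C program computes a CRC32C data digest and shows how corrupting a single bit of the payload yields a mismatching digest, which appears to be the condition this test exercises repeatedly:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Software CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
     * NVMe/TCP data digests are CRC32C over the PDU data payload. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = (const uint8_t *)buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        uint8_t payload[512] = {0};   /* arbitrary example payload */

        uint32_t expected = crc32c(payload, sizeof(payload));
        payload[100] ^= 0x01;         /* corrupt one bit "in flight" */
        uint32_t received = crc32c(payload, sizeof(payload));

        printf("expected data digest: 0x%08x\n", expected);
        printf("received data digest: 0x%08x -> mismatch, data digest error\n", received);
        return 0;
    }

Built with a plain cc invocation, flipping any payload bit changes the computed digest, so the receiver's recomputed value no longer matches the digest field sent with the PDU; that per-command mismatch-then-error pattern is what repeats throughout the log below.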
00:20:44.457 [2024-07-15 19:49:09.980741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:09.981043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:09.981070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:09.986316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:09.986626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:09.986652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:09.992001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:09.992320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:09.992347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:09.997656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:09.997988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:09.998015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.003940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.004248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.004277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.009707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.010041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.010069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.015458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.015766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.015794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.021237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.021544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.021586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.026997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.027322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.027351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.032926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.033292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.033320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.039620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.039956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.040001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.045371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.045700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.045727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.051304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.051614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.051642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.056902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.057241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.057268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.062646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.062955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.062982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.068336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.068662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.068689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.074132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.074453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.074479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.079912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.080212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.080249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.085719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.086052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.086080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.091370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.091689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.091716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.096979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.097310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.097337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.102688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.102985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.103012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.108236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.108533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.108559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.457 [2024-07-15 19:49:10.113800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.457 [2024-07-15 19:49:10.114134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.457 [2024-07-15 19:49:10.114172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.119450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.119782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.119809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.125012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.125330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.125357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.130671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.130968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.130995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.136233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.136516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 
[2024-07-15 19:49:10.136542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.141846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.142185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.142212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.147492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.147791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.147817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.153040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.153370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.153397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.158566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.158863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.158890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.164224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.164544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.164570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.169790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.170126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.170152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.175415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.175711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.175737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.181041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.181349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.181375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.186750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.187056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.187084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.192307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.192605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.192631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.197775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.198087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.198113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.203372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.203669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.203694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.209050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.209387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.209414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.214912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.215271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.215298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.220757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.221048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.221074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.226408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.226775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.226803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.458 [2024-07-15 19:49:10.232166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.458 [2024-07-15 19:49:10.232476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.458 [2024-07-15 19:49:10.232502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.237676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.237995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.238022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.243270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.243564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.243590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.248895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.249179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.249233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.254761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.255074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.255107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.260650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.260945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.260972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.266427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.266735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.266761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.272112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.272479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.272506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.277681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.278017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.278045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.283367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.283665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.283690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.289031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.289363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.289390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.294727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 
[2024-07-15 19:49:10.295025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.295051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.300277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.300595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.719 [2024-07-15 19:49:10.300621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.719 [2024-07-15 19:49:10.305760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.719 [2024-07-15 19:49:10.306096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.306128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.311417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.311717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.311744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.316948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.317246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.317273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.322574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.322890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.322918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.328134] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.328468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.328494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.333671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) 
with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.334008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.334035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.339263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.339570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.339596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.344847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.345144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.345179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.350450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.350747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.350773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.356049] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.356379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.356405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.361617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.361912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.361938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.367145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.367442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.367468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.372678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.372977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.373003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.378332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.378630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.378655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.383863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.384198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.384234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.389411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.389736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.389762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.395062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.395371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.395397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.400680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.400978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.401004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.406244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.406558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.406584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.411721] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.412017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.412043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.417333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.417655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.417682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.422944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.423239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.423266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.428471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.428780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.428806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.434107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.434477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.434504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.439712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.440016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.440043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.445273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.445569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.445595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.720 
[2024-07-15 19:49:10.450913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.451203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.451241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.456571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.456868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.456894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.462232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.720 [2024-07-15 19:49:10.462534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.720 [2024-07-15 19:49:10.462559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.720 [2024-07-15 19:49:10.467748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.721 [2024-07-15 19:49:10.468045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.721 [2024-07-15 19:49:10.468072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.721 [2024-07-15 19:49:10.473219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.721 [2024-07-15 19:49:10.473518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.721 [2024-07-15 19:49:10.473544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.721 [2024-07-15 19:49:10.478685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.721 [2024-07-15 19:49:10.478971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.721 [2024-07-15 19:49:10.478997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.721 [2024-07-15 19:49:10.484234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.721 [2024-07-15 19:49:10.484555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.721 [2024-07-15 19:49:10.484581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:20:44.721 [2024-07-15 19:49:10.489868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.721 [2024-07-15 19:49:10.490222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.721 [2024-07-15 19:49:10.490264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.721 [2024-07-15 19:49:10.495461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.721 [2024-07-15 19:49:10.495842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.721 [2024-07-15 19:49:10.495870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.979 [2024-07-15 19:49:10.501241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf77b90) with pdu=0x2000190fef90 00:20:44.979 [2024-07-15 19:49:10.501349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.979 [2024-07-15 19:49:10.501372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.979 00:20:44.979 Latency(us) 00:20:44.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.979 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:44.979 nvme0n1 : 2.00 5402.10 675.26 0.00 0.00 2955.20 2308.65 6851.49 00:20:44.979 =================================================================================================================== 00:20:44.979 Total : 5402.10 675.26 0.00 0.00 2955.20 2308.65 6851.49 00:20:44.979 0 00:20:44.979 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:44.979 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:44.979 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:44.979 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:44.979 | .driver_specific 00:20:44.979 | .nvme_error 00:20:44.979 | .status_code 00:20:44.979 | .command_transient_transport_error' 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 349 > 0 )) 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94106 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 94106 ']' 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 94106 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94106 00:20:45.237 19:49:10 
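The trace above shows how digest.sh derives its pass/fail condition: after the 2-second bdevperf run, it calls the bdev_get_iostat RPC on the bperf socket and extracts the command_transient_transport_error counter from the NVMe error statistics with jq (349 injected data-digest errors were counted in this run). A minimal standalone sketch of that check, assuming the same socket path and bdev name used here:

    # Count transient transport errors reported for the bdevperf-attached NVMe bdev.
    # Socket path and bdev name are taken from this run; adjust them for other setups.
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                bdev_get_iostat -b nvme0n1 |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 )) || exit 1   # the digest-error test expects at least one such completion
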
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:45.237 killing process with pid 94106 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94106' 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 94106 00:20:45.237 Received shutdown signal, test time was about 2.000000 seconds 00:20:45.237 00:20:45.237 Latency(us) 00:20:45.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.237 =================================================================================================================== 00:20:45.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.237 19:49:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 94106 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93795 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93795 ']' 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93795 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93795 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:45.495 killing process with pid 93795 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93795' 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93795 00:20:45.495 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93795 00:20:45.753 00:20:45.753 real 0m18.481s 00:20:45.753 user 0m35.175s 00:20:45.753 sys 0m4.714s 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:45.753 ************************************ 00:20:45.753 END TEST nvmf_digest_error 00:20:45.753 ************************************ 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # 
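The killprocess helper traced above (first for bdevperf pid 94106, then for the target pid 93795) follows a simple pattern: confirm the pid was given and is still alive, look up the process name, log the kill, signal it, and wait for it to exit. An approximate sketch of that shape; the real helper in autotest_common.sh has more error handling:

    # Approximate shape of the killprocess helper seen in the trace (details are assumptions).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                     # is it still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        if [ "$name" = "sudo" ]; then
            sudo kill "$pid"                           # assumption: elevate only for sudo-wrapped processes
        else
            kill "$pid"
        fi
        wait "$pid" || true
    }
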
modprobe -v -r nvme-tcp 00:20:45.753 rmmod nvme_tcp 00:20:45.753 rmmod nvme_fabrics 00:20:45.753 rmmod nvme_keyring 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93795 ']' 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93795 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93795 ']' 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93795 00:20:45.753 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93795) - No such process 00:20:45.753 Process with pid 93795 is not found 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93795 is not found' 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:45.753 00:20:45.753 real 0m38.175s 00:20:45.753 user 1m11.452s 00:20:45.753 sys 0m9.795s 00:20:45.753 ************************************ 00:20:45.753 END TEST nvmf_digest 00:20:45.753 ************************************ 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:45.753 19:49:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:45.753 19:49:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:45.753 19:49:11 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:20:45.753 19:49:11 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:20:45.753 19:49:11 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:45.753 19:49:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:45.753 19:49:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.753 19:49:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:46.012 ************************************ 00:20:46.012 START TEST nvmf_mdns_discovery 00:20:46.012 ************************************ 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:46.012 * Looking for test storage... 
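Before the mdns_discovery test starts, nvmftestfini tears the previous environment down: sync, unload nvme-tcp/nvme-fabrics (the rmmod lines above), kill the target if its pid is still around (93795 was already gone here), remove the nvmf_tgt_ns_spdk namespace and flush the initiator address. Condensed into a sketch of the commands visible in the trace; the retry and error handling in nvmf/common.sh is more elaborate, and the netns delete is an assumption about what _remove_spdk_ns does:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    kill "$nvmfpid" 2>/dev/null || echo "Process with pid $nvmfpid is not found"
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if
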
00:20:46.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
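nvmf/common.sh, sourced just above, generates a random host NQN with nvme-cli and keeps the NQN/host-ID pair in the NVME_HOST array so that later fabric commands identify this initiator consistently. A hedged example of how those values are typically consumed; the target address, service id and subsystem NQN below are placeholders, not taken from this log:

    # Generate a host identity once and reuse it for fabric commands.
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # Placeholder connect call for illustration only.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
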
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:46.012 
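mdns_discovery.sh pins the well-known discovery NQN (nqn.2014-08.org.nvmexpress.discovery) and TCP port 8009 that NVMe-oF discovery controllers conventionally listen on, plus the subsystem and host NQNs it will use. Outside of the mDNS path exercised by this test, the same kind of discovery endpoint can be queried directly with nvme-cli; a sketch for illustration, using this run's first target IP and host NQN as example values:

    # Query a discovery controller at the conventional NVMe/TCP discovery port.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 --hostnqn=nqn.2021-12.io.spdk:test
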
19:49:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.012 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:46.013 Cannot find device "nvmf_tgt_br" 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.013 Cannot find device "nvmf_tgt_br2" 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
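nvmf_veth_init, entered above, runs the target side of the test inside the nvmf_tgt_ns_spdk network namespace; NVMF_TARGET_NS_CMD is the prefix array that every target-side command is wrapped in, which is why the subsequent trace lines all start with "ip netns exec nvmf_tgt_ns_spdk". A small illustration of that pattern (the ip addr call is just an example command):

    # Target-side commands are executed inside the namespace via the prefix array.
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    "${NVMF_TARGET_NS_CMD[@]}" ip addr show   # example: list addresses as the target sees them
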
down 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:46.013 Cannot find device "nvmf_tgt_br" 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:46.013 Cannot find device "nvmf_tgt_br2" 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:46.013 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:46.270 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.270 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.270 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:46.270 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.270 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.270 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:46.270 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:46.271 19:49:11 
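The ip commands traced around this point build the virtual test network: a namespace for the target, veth pairs whose peer ends stay in the root namespace, addresses 10.0.0.1 (initiator), 10.0.0.2 and 10.0.0.3 (target interfaces inside the namespace), and an nvmf_br bridge that ties the peer ends together. The essential steps, condensed; error handling and the second target interface (nvmf_tgt_if2/10.0.0.3) follow the same pattern:

    # Minimal reconstruction of the veth/bridge topology used by the test.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br   # the bridge enslaving appears just below in the trace
    ip link set nvmf_tgt_br master nvmf_br
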
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:46.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:20:46.271 00:20:46.271 --- 10.0.0.2 ping statistics --- 00:20:46.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.271 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:46.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:20:46.271 00:20:46.271 --- 10.0.0.3 ping statistics --- 00:20:46.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.271 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:46.271 00:20:46.271 --- 10.0.0.1 ping statistics --- 00:20:46.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.271 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.271 19:49:11 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94397 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94397 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94397 ']' 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.271 19:49:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.529 [2024-07-15 19:49:12.069986] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:20:46.529 [2024-07-15 19:49:12.070084] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.529 [2024-07-15 19:49:12.210469] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.787 [2024-07-15 19:49:12.336743] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.787 [2024-07-15 19:49:12.336802] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.787 [2024-07-15 19:49:12.336822] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.787 [2024-07-15 19:49:12.336832] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.787 [2024-07-15 19:49:12.336842] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
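A condensed sketch of the network setup traced above, with every interface name, address, and flag taken from the trace (illustrative only, not the actual nvmf/common.sh helpers): the harness gives the SPDK target its own network namespace with two addresses, bridges it to a veth leg left in the root namespace, opens TCP 4420 toward the target, and then starts nvmf_tgt inside that namespace with --wait-for-rpc so it idles until configured.

    # Sketch: rebuild the veth/bridge/netns test bed seen in the trace (iproute2 + iptables).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Target runs inside the namespace and stays paused until RPC configuration arrives.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

With this wiring, 10.0.0.2 and 10.0.0.3 live inside the target namespace and 10.0.0.1 in the root namespace, which is exactly what the three ping checks above verify before the target is configured.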
00:20:46.787 [2024-07-15 19:49:12.336876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.352 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.352 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:47.352 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.352 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:47.352 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 [2024-07-15 19:49:13.254403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 [2024-07-15 19:49:13.266545] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 null0 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
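The rpc_cmd calls above drive the freshly started target over SPDK's JSON-RPC interface; in this harness rpc_cmd is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock. A rough stand-alone equivalent of the bring-up traced so far, with all values copied from the trace (the rpc.py path is assumed from the repo layout, and the flag readings in the comments are best-effort, not authoritative):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location inside the checked-out repo
    # Restrict discovery log pages to entries matching the listener address (value from the trace).
    $RPC nvmf_set_config --discovery-filter=address
    # The target was launched with --wait-for-rpc, so initialization has to be kicked explicitly.
    $RPC framework_start_init
    # NVMe/TCP transport; -o and -u 8192 are carried over verbatim from NVMF_TRANSPORT_OPTS.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # Discovery subsystem listener that the mDNS advertisement will point at (10.0.0.2:8009).
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009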
00:20:47.612 null1 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 null2 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 null3 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94447 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94447 /tmp/host.sock 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94447 ']' 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:47.612 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.612 19:49:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 [2024-07-15 19:49:13.365273] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
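Next the target gets its backing devices and a second SPDK application is brought up to play the NVMe-oF host role on core 0, with a private RPC socket at /tmp/host.sock so the two instances can be driven independently. A sketch of the same steps (the loop simply compresses the four bdev_null_create calls from the trace; sizes are in MB, block size in bytes):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed, as above
    # Four 1000 MB null bdevs with 512-byte blocks to back the test namespaces.
    for b in null0 null1 null2 null3; do
        $RPC bdev_null_create "$b" 1000 512
    done
    $RPC bdev_wait_for_examine
    # Second SPDK app acting as the host/initiator, reachable on its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    # Host-side RPCs later in the trace select that socket explicitly, e.g.:
    #   $RPC -s /tmp/host.sock bdev_nvme_get_controllers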
00:20:47.612 [2024-07-15 19:49:13.365578] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94447 ] 00:20:47.871 [2024-07-15 19:49:13.502451] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.871 [2024-07-15 19:49:13.624717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94476 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:48.805 19:49:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:48.805 Process 981 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:48.805 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:48.805 Successfully dropped root privileges. 00:20:48.805 avahi-daemon 0.8 starting up. 00:20:48.805 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:49.739 Successfully called chroot(). 00:20:49.740 Successfully dropped remaining capabilities. 00:20:49.740 No service file found in /etc/avahi/services. 00:20:49.740 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:49.740 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:49.740 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:49.740 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:49.740 Network interface enumeration completed. 00:20:49.740 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:20:49.740 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:49.740 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:20:49.740 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:49.740 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 4125995805. 
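The avahi-daemon messages above come from a private mDNS responder the harness runs inside the target namespace after killing any system-wide instance. The inline configuration limits it to nvmf_tgt_if/nvmf_tgt_if2 and IPv4, which is why only 10.0.0.2 and 10.0.0.3 get address records registered. Roughly (the || true is a sketch convenience; the echo fed through process substitution, which appears in the trace as -f /dev/fd/63, and the sleep mirror the traced commands):

    avahi-daemon --kill || true    # stop any responder left over from a previous run
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon \
        -f <(echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
    avahipid=$!
    sleep 1    # give it time to join the mDNS multicast groups before discovery starts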
00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.740 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.997 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:20:49.997 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:49.997 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.997 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.997 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.998 [2024-07-15 19:49:15.745109] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.998 19:49:15 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:49.998 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 [2024-07-15 19:49:15.815577] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 [2024-07-15 19:49:15.855492] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 
19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 [2024-07-15 19:49:15.863473] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.256 19:49:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:20:51.191 [2024-07-15 19:49:16.645098] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:51.758 [2024-07-15 19:49:17.245108] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:51.758 [2024-07-15 19:49:17.245160] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:51.758 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:51.758 cookie is 0 00:20:51.758 is_local: 1 00:20:51.758 our_own: 0 00:20:51.758 wide_area: 0 00:20:51.758 multicast: 1 00:20:51.758 cached: 1 00:20:51.758 [2024-07-15 19:49:17.345098] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:51.758 [2024-07-15 19:49:17.345121] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:51.758 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:51.758 cookie is 0 00:20:51.758 is_local: 1 00:20:51.758 our_own: 0 00:20:51.758 wide_area: 0 00:20:51.758 multicast: 1 00:20:51.758 cached: 1 00:20:51.758 [2024-07-15 19:49:17.345147] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:51.758 [2024-07-15 19:49:17.445101] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:51.758 [2024-07-15 19:49:17.445126] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:51.758 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:51.758 cookie is 0 00:20:51.758 is_local: 1 00:20:51.758 our_own: 0 00:20:51.758 wide_area: 0 00:20:51.758 multicast: 1 00:20:51.758 cached: 1 00:20:52.017 [2024-07-15 19:49:17.545117] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:52.017 [2024-07-15 19:49:17.545143] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:52.017 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:52.017 cookie is 0 00:20:52.017 is_local: 1 00:20:52.017 our_own: 0 00:20:52.017 wide_area: 0 00:20:52.017 multicast: 1 00:20:52.017 cached: 1 00:20:52.017 [2024-07-15 19:49:17.545169] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:52.585 [2024-07-15 19:49:18.251173] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:52.585 [2024-07-15 19:49:18.251239] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:52.585 [2024-07-15 19:49:18.251257] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:52.585 [2024-07-15 19:49:18.337369] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:20:52.844 [2024-07-15 19:49:18.394556] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:52.844 [2024-07-15 19:49:18.394585] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:52.844 [2024-07-15 19:49:18.450930] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:52.844 [2024-07-15 19:49:18.450971] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:52.844 [2024-07-15 19:49:18.451005] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:52.844 [2024-07-15 19:49:18.537076] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:20:52.844 [2024-07-15 19:49:18.592984] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:52.844 [2024-07-15 19:49:18.593013] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:55.374 19:49:20 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:20:55.374 19:49:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 
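The comparisons above are the first verification pass: after bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test was issued on the host socket and the target published its subsystems with nvmf_publish_mdns_prr, avahi resolved the spdk0/spdk1 _nvme-disc._tcp services and the host attached one discovery controller per service (mdns0_nvme0, mdns1_nvme0) plus their namespaces. The helper functions checked here are thin jq pipelines over host-side RPCs; a sketch of their shape, with the RPC names and jq filters taken from the trace (rpc.py path assumed):

    host_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }
    get_mdns_discovery_svcs() { host_rpc bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs; }
    get_discovery_ctrlrs()    { host_rpc bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs; }
    get_subsystem_names()     { host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()           { host_rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    # Expected at this point in the run, per the checks above:
    #   get_subsystem_names -> mdns0_nvme0 mdns1_nvme0
    #   get_bdev_list       -> mdns0_nvme0n1 mdns1_nvme0n1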
00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:55.374 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:20:55.632 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.633 19:49:21 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:20:56.567 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:20:56.567 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:56.567 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.567 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:56.567 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.568 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:56.568 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.826 [2024-07-15 19:49:22.402694] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:56.826 [2024-07-15 19:49:22.403797] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:56.826 [2024-07-15 19:49:22.403852] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:56.826 [2024-07-15 19:49:22.403887] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:56.826 [2024-07-15 19:49:22.403900] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.826 [2024-07-15 19:49:22.410581] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:56.826 [2024-07-15 19:49:22.411771] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:56.826 [2024-07-15 19:49:22.411857] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.826 19:49:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:20:56.826 [2024-07-15 19:49:22.542873] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:20:56.826 [2024-07-15 19:49:22.543117] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:20:56.826 [2024-07-15 19:49:22.605199] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:56.826 [2024-07-15 19:49:22.605226] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:56.826 [2024-07-15 19:49:22.605248] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:56.826 [2024-07-15 19:49:22.605265] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:56.826 
[2024-07-15 19:49:22.605384] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:56.826 [2024-07-15 19:49:22.605392] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:56.826 [2024-07-15 19:49:22.605397] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:56.826 [2024-07-15 19:49:22.605409] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:57.084 [2024-07-15 19:49:22.650973] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:57.084 [2024-07-15 19:49:22.650995] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:57.084 [2024-07-15 19:49:22.651049] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:57.084 [2024-07-15 19:49:22.651057] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:57.648 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:20:57.648 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:57.648 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:57.648 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.648 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.648 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:57.648 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 
00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.906 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:57.907 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.167 [2024-07-15 19:49:23.731737] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:58.167 [2024-07-15 19:49:23.731791] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:58.167 [2024-07-15 19:49:23.731825] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:58.167 [2024-07-15 19:49:23.731839] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:58.167 [2024-07-15 19:49:23.732254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.167 [2024-07-15 19:49:23.732294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.167 [2024-07-15 19:49:23.732308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.167 [2024-07-15 19:49:23.732318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.167 [2024-07-15 19:49:23.732328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.167 [2024-07-15 19:49:23.732337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.167 [2024-07-15 19:49:23.732348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.167 [2024-07-15 19:49:23.732357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.167 [2024-07-15 19:49:23.732366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.167 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.167 [2024-07-15 19:49:23.742166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.167 [2024-07-15 19:49:23.743768] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:58.167 [2024-07-15 19:49:23.743837] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:58.167 [2024-07-15 19:49:23.745397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.167 [2024-07-15 19:49:23.745430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.167 [2024-07-15 19:49:23.745458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.167 [2024-07-15 19:49:23.745468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.167 [2024-07-15 19:49:23.745477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.167 [2024-07-15 19:49:23.745486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.168 [2024-07-15 19:49:23.745496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.168 [2024-07-15 19:49:23.745504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.168 [2024-07-15 19:49:23.745513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.168 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.168 19:49:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:20:58.168 [2024-07-15 19:49:23.752220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.168 [2024-07-15 19:49:23.752352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.168 [2024-07-15 19:49:23.752374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.168 [2024-07-15 19:49:23.752385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.168 [2024-07-15 19:49:23.752418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.752450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.168 [2024-07-15 19:49:23.752459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.168 [2024-07-15 19:49:23.752471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.168 [2024-07-15 19:49:23.752488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.168 [2024-07-15 19:49:23.755359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.762302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.168 [2024-07-15 19:49:23.762415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.168 [2024-07-15 19:49:23.762435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.168 [2024-07-15 19:49:23.762445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.168 [2024-07-15 19:49:23.762460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.762473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.168 [2024-07-15 19:49:23.762481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.168 [2024-07-15 19:49:23.762489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.168 [2024-07-15 19:49:23.762503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.168 [2024-07-15 19:49:23.765375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.168 [2024-07-15 19:49:23.765485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.168 [2024-07-15 19:49:23.765504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.168 [2024-07-15 19:49:23.765514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.168 [2024-07-15 19:49:23.765528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.765541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.168 [2024-07-15 19:49:23.765549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.168 [2024-07-15 19:49:23.765558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.168 [2024-07-15 19:49:23.765571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
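The block above (and the near-identical retry blocks that follow) repeats one failure loop: connect() to the listener that was just removed on 10.0.0.2:4420 or 10.0.0.3:4420 fails with errno 111 (ECONNREFUSED on Linux), controller reinitialization is marked failed, and the reset is retried a few milliseconds later. A minimal standalone sketch of that retry-on-ECONNREFUSED pattern follows; it only illustrates the loop visible in the log and is not SPDK's reconnect code, with the address, port and retry count taken from the log or picked for illustration.

# Minimal sketch of the retry pattern in the log: a TCP connect that keeps
# failing with ECONNREFUSED (errno 111) because the listener on port 4420
# was removed. Illustration only, not SPDK's reconnect logic.
import errno
import socket
import time

def try_connect(addr: str, port: int, attempts: int = 5, delay: float = 0.01) -> bool:
    """Return True once a TCP connection succeeds, False after all attempts fail."""
    for attempt in range(1, attempts + 1):
        try:
            with socket.create_connection((addr, port), timeout=1.0):
                return True
        except OSError as exc:
            if exc.errno == errno.ECONNREFUSED:  # errno 111, as printed by posix_sock_create
                print(f"attempt {attempt}: connect() to {addr}:{port} refused, retrying")
                time.sleep(delay)
            else:
                raise
    return False

if __name__ == "__main__":
    # 10.0.0.2:4420 is the listener the test removed just before these errors.
    print(try_connect("10.0.0.2", 4420))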
00:20:58.168 [2024-07-15 19:49:23.772369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.168 [2024-07-15 19:49:23.772475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.168 [2024-07-15 19:49:23.772494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.168 [2024-07-15 19:49:23.772504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.168 [2024-07-15 19:49:23.772525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.772538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.168 [2024-07-15 19:49:23.772546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.168 [2024-07-15 19:49:23.772554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.168 [2024-07-15 19:49:23.772583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.168 [2024-07-15 19:49:23.775453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.168 [2024-07-15 19:49:23.775563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.168 [2024-07-15 19:49:23.775582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.168 [2024-07-15 19:49:23.775592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.168 [2024-07-15 19:49:23.775606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.775628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.168 [2024-07-15 19:49:23.775637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.168 [2024-07-15 19:49:23.775645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.168 [2024-07-15 19:49:23.775658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.168 [2024-07-15 19:49:23.782415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.168 [2024-07-15 19:49:23.782523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.168 [2024-07-15 19:49:23.782557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.168 [2024-07-15 19:49:23.782566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.168 [2024-07-15 19:49:23.782580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.782592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.168 [2024-07-15 19:49:23.782603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.168 [2024-07-15 19:49:23.782612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.168 [2024-07-15 19:49:23.782624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.168 [2024-07-15 19:49:23.785517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.168 [2024-07-15 19:49:23.785620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.168 [2024-07-15 19:49:23.785639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.168 [2024-07-15 19:49:23.785649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.168 [2024-07-15 19:49:23.785662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.785684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.168 [2024-07-15 19:49:23.785692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.168 [2024-07-15 19:49:23.785701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.168 [2024-07-15 19:49:23.785713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.168 [2024-07-15 19:49:23.792480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.168 [2024-07-15 19:49:23.792599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.168 [2024-07-15 19:49:23.792620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.168 [2024-07-15 19:49:23.792629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.168 [2024-07-15 19:49:23.792643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.168 [2024-07-15 19:49:23.792656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.169 [2024-07-15 19:49:23.792664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.169 [2024-07-15 19:49:23.792672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.169 [2024-07-15 19:49:23.792685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.169 [2024-07-15 19:49:23.795576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.169 [2024-07-15 19:49:23.795686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.169 [2024-07-15 19:49:23.795704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.169 [2024-07-15 19:49:23.795714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.169 [2024-07-15 19:49:23.795728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.169 [2024-07-15 19:49:23.795750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.169 [2024-07-15 19:49:23.795759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.169 [2024-07-15 19:49:23.795768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.169 [2024-07-15 19:49:23.795795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.169 [2024-07-15 19:49:23.802574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.169 [2024-07-15 19:49:23.802683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.169 [2024-07-15 19:49:23.802701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.169 [2024-07-15 19:49:23.802711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.169 [2024-07-15 19:49:23.802725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.169 [2024-07-15 19:49:23.802737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.169 [2024-07-15 19:49:23.802745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.169 [2024-07-15 19:49:23.802753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.169 [2024-07-15 19:49:23.802766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.169 [2024-07-15 19:49:23.805641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.169 [2024-07-15 19:49:23.805744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.169 [2024-07-15 19:49:23.805762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.169 [2024-07-15 19:49:23.805772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.169 [2024-07-15 19:49:23.805786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.169 [2024-07-15 19:49:23.805820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.169 [2024-07-15 19:49:23.805830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.169 [2024-07-15 19:49:23.805838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.169 [2024-07-15 19:49:23.805850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.169 [2024-07-15 19:49:23.812639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.169 [2024-07-15 19:49:23.812748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.169 [2024-07-15 19:49:23.812767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.169 [2024-07-15 19:49:23.812777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.169 [2024-07-15 19:49:23.812791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.169 [2024-07-15 19:49:23.812818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.169 [2024-07-15 19:49:23.812827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.169 [2024-07-15 19:49:23.812836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.169 [2024-07-15 19:49:23.812849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.169 [2024-07-15 19:49:23.815701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.169 [2024-07-15 19:49:23.815804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.169 [2024-07-15 19:49:23.815823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.169 [2024-07-15 19:49:23.815832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.169 [2024-07-15 19:49:23.815856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.169 [2024-07-15 19:49:23.815892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.169 [2024-07-15 19:49:23.815901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.169 [2024-07-15 19:49:23.815909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.169 [2024-07-15 19:49:23.815930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.169 [2024-07-15 19:49:23.822704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.169 [2024-07-15 19:49:23.822810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.169 [2024-07-15 19:49:23.822828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.169 [2024-07-15 19:49:23.822838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.169 [2024-07-15 19:49:23.822852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.169 [2024-07-15 19:49:23.822864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.169 [2024-07-15 19:49:23.822872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.169 [2024-07-15 19:49:23.822879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.169 [2024-07-15 19:49:23.822906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.169 [2024-07-15 19:49:23.825762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.169 [2024-07-15 19:49:23.825864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.169 [2024-07-15 19:49:23.825883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.169 [2024-07-15 19:49:23.825892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.169 [2024-07-15 19:49:23.825905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.169 [2024-07-15 19:49:23.825952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.169 [2024-07-15 19:49:23.825961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.169 [2024-07-15 19:49:23.825998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.169 [2024-07-15 19:49:23.826013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.169 [2024-07-15 19:49:23.832770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.169 [2024-07-15 19:49:23.832893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.169 [2024-07-15 19:49:23.832912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.169 [2024-07-15 19:49:23.832922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.169 [2024-07-15 19:49:23.832936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.169 [2024-07-15 19:49:23.833011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.170 [2024-07-15 19:49:23.833026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.170 [2024-07-15 19:49:23.833035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.170 [2024-07-15 19:49:23.833049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.170 [2024-07-15 19:49:23.835823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.170 [2024-07-15 19:49:23.835933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.170 [2024-07-15 19:49:23.835953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.170 [2024-07-15 19:49:23.835963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.170 [2024-07-15 19:49:23.835977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.170 [2024-07-15 19:49:23.836020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.170 [2024-07-15 19:49:23.836031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.170 [2024-07-15 19:49:23.836039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.170 [2024-07-15 19:49:23.836053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.170 [2024-07-15 19:49:23.842837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.170 [2024-07-15 19:49:23.842926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.170 [2024-07-15 19:49:23.842945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.170 [2024-07-15 19:49:23.842954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.170 [2024-07-15 19:49:23.842968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.170 [2024-07-15 19:49:23.842980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.170 [2024-07-15 19:49:23.842988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.170 [2024-07-15 19:49:23.842996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.170 [2024-07-15 19:49:23.843008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.170 [2024-07-15 19:49:23.845886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.170 [2024-07-15 19:49:23.846001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.170 [2024-07-15 19:49:23.846020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.170 [2024-07-15 19:49:23.846030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.170 [2024-07-15 19:49:23.846044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.170 [2024-07-15 19:49:23.846079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.170 [2024-07-15 19:49:23.846088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.170 [2024-07-15 19:49:23.846097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.170 [2024-07-15 19:49:23.846110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.170 [2024-07-15 19:49:23.852900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.170 [2024-07-15 19:49:23.852991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.170 [2024-07-15 19:49:23.853009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.170 [2024-07-15 19:49:23.853019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.170 [2024-07-15 19:49:23.853032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.170 [2024-07-15 19:49:23.853044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.170 [2024-07-15 19:49:23.853051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.170 [2024-07-15 19:49:23.853059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.170 [2024-07-15 19:49:23.853071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.170 [2024-07-15 19:49:23.855929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.170 [2024-07-15 19:49:23.856017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.170 [2024-07-15 19:49:23.856036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.170 [2024-07-15 19:49:23.856045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.170 [2024-07-15 19:49:23.856067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.170 [2024-07-15 19:49:23.856089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.170 [2024-07-15 19:49:23.856098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.170 [2024-07-15 19:49:23.856106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.170 [2024-07-15 19:49:23.856133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.170 [2024-07-15 19:49:23.862953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.170 [2024-07-15 19:49:23.863240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.170 [2024-07-15 19:49:23.863264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.170 [2024-07-15 19:49:23.863276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.170 [2024-07-15 19:49:23.863310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.170 [2024-07-15 19:49:23.863325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.170 [2024-07-15 19:49:23.863334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.170 [2024-07-15 19:49:23.863343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.170 [2024-07-15 19:49:23.863358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.170 [2024-07-15 19:49:23.866000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:58.170 [2024-07-15 19:49:23.866095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.170 [2024-07-15 19:49:23.866114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1698300 with addr=10.0.0.3, port=4420 00:20:58.170 [2024-07-15 19:49:23.866124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698300 is same with the state(5) to be set 00:20:58.170 [2024-07-15 19:49:23.866139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1698300 (9): Bad file descriptor 00:20:58.170 [2024-07-15 19:49:23.866168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:58.170 [2024-07-15 19:49:23.866189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:58.170 [2024-07-15 19:49:23.866199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:58.170 [2024-07-15 19:49:23.866213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:58.170 [2024-07-15 19:49:23.873203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:58.170 [2024-07-15 19:49:23.873293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.170 [2024-07-15 19:49:23.873311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16dc080 with addr=10.0.0.2, port=4420 00:20:58.170 [2024-07-15 19:49:23.873321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc080 is same with the state(5) to be set 00:20:58.170 [2024-07-15 19:49:23.873360] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:20:58.170 [2024-07-15 19:49:23.873379] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:58.170 [2024-07-15 19:49:23.873401] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:58.170 [2024-07-15 19:49:23.873435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dc080 (9): Bad file descriptor 00:20:58.171 [2024-07-15 19:49:23.873461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:58.171 [2024-07-15 19:49:23.873469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:58.171 [2024-07-15 19:49:23.873478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:58.171 [2024-07-15 19:49:23.873504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
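The discovery_remove_controllers messages in the block above (and the matching entries for 10.0.0.2 just below) show the discovery poller reconciling its attached paths against the latest discovery log page: the 4420 path that is no longer advertised is reported "not found" and dropped, while the 4421 path is "found again" and kept. A hypothetical, minimal sketch of that reconciliation step follows; it only makes the set difference explicit and is not SPDK's implementation.

# Hypothetical sketch of the reconciliation suggested by the
# discovery_remove_controllers messages: paths absent from the newest
# discovery log page are dropped, paths still advertised are kept.
def reconcile_paths(attached: set[str], advertised: set[str]) -> tuple[set[str], set[str]]:
    """Return (paths_to_remove, paths_to_keep) for one discovery poll."""
    to_remove = attached - advertised   # e.g. "10.0.0.3:4420" -> "not found"
    to_keep = attached & advertised     # e.g. "10.0.0.3:4421" -> "found again"
    return to_remove, to_keep

if __name__ == "__main__":
    attached = {"10.0.0.3:4420", "10.0.0.3:4421"}
    advertised = {"10.0.0.3:4421"}      # the 4420 listener was just removed
    print(reconcile_paths(attached, advertised))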
00:20:58.171 [2024-07-15 19:49:23.874310] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:58.171 [2024-07-15 19:49:23.874335] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:58.171 [2024-07-15 19:49:23.874353] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:58.428 [2024-07-15 19:49:23.959430] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:58.428 [2024-07-15 19:49:23.960422] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:58.993 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:58.993 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:58.993 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:58.993 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.993 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:58.993 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.993 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:58.993 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.250 19:49:24 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:59.250 19:49:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.508 19:49:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:59.508 19:49:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:59.508 19:49:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:20:59.508 19:49:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:59.508 19:49:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.508 19:49:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.508 19:49:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.508 19:49:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:20:59.508 [2024-07-15 19:49:25.145136] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:00.441 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:21:00.441 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:00.441 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.441 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:00.442 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.699 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:21:00.699 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.700 [2024-07-15 19:49:26.297006] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:21:00.700 2024/07/15 19:49:26 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: 
Code=-17 Msg=File exists 00:21:00.700 request: 00:21:00.700 { 00:21:00.700 "method": "bdev_nvme_start_mdns_discovery", 00:21:00.700 "params": { 00:21:00.700 "name": "mdns", 00:21:00.700 "svcname": "_nvme-disc._http", 00:21:00.700 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:00.700 } 00:21:00.700 } 00:21:00.700 Got JSON-RPC error response 00:21:00.700 GoRPCClient: error on JSON-RPC call 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:00.700 19:49:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:21:01.265 [2024-07-15 19:49:26.885673] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:01.265 [2024-07-15 19:49:26.985673] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:01.521 [2024-07-15 19:49:27.085694] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:01.521 [2024-07-15 19:49:27.085757] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:01.521 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:01.521 cookie is 0 00:21:01.521 is_local: 1 00:21:01.521 our_own: 0 00:21:01.521 wide_area: 0 00:21:01.521 multicast: 1 00:21:01.521 cached: 1 00:21:01.521 [2024-07-15 19:49:27.185695] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:01.521 [2024-07-15 19:49:27.185737] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:21:01.521 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:01.521 cookie is 0 00:21:01.521 is_local: 1 00:21:01.521 our_own: 0 00:21:01.521 wide_area: 0 00:21:01.521 multicast: 1 00:21:01.521 cached: 1 00:21:01.521 [2024-07-15 19:49:27.185767] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:01.521 [2024-07-15 19:49:27.285713] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:01.521 [2024-07-15 19:49:27.285771] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:01.521 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:01.521 cookie is 0 00:21:01.521 is_local: 1 00:21:01.521 our_own: 0 00:21:01.521 wide_area: 0 00:21:01.521 multicast: 1 00:21:01.521 cached: 1 00:21:01.778 [2024-07-15 19:49:27.385678] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:01.778 [2024-07-15 19:49:27.385703] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:21:01.778 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:01.778 cookie is 0 00:21:01.778 is_local: 1 00:21:01.778 our_own: 0 00:21:01.778 wide_area: 0 00:21:01.778 multicast: 1 00:21:01.778 cached: 1 00:21:01.778 [2024-07-15 19:49:27.385729] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:21:02.383 [2024-07-15 19:49:28.092687] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:02.383 [2024-07-15 19:49:28.092756] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:02.383 [2024-07-15 19:49:28.092793] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:02.639 [2024-07-15 19:49:28.178792] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:21:02.639 [2024-07-15 19:49:28.239278] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:21:02.639 [2024-07-15 19:49:28.239318] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:21:02.639 [2024-07-15 19:49:28.292524] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:02.639 [2024-07-15 19:49:28.292550] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:02.639 [2024-07-15 19:49:28.292583] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:02.639 [2024-07-15 19:49:28.378652] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:21:02.895 [2024-07-15 19:49:28.439098] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:02.895 [2024-07-15 19:49:28.439151] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.166 
19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:06.166 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 [2024-07-15 19:49:31.497077] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:21:06.167 2024/07/15 19:49:31 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:06.167 request: 00:21:06.167 { 00:21:06.167 "method": "bdev_nvme_start_mdns_discovery", 00:21:06.167 "params": { 00:21:06.167 "name": "cdc", 00:21:06.167 "svcname": "_nvme-disc._tcp", 00:21:06.167 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:06.167 } 00:21:06.167 } 00:21:06.167 Got JSON-RPC error response 00:21:06.167 GoRPCClient: error on JSON-RPC call 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94447 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94447 00:21:06.167 [2024-07-15 19:49:31.741903] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94476 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:21:06.167 Got SIGTERM, quitting. 00:21:06.167 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:21:06.167 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:21:06.167 avahi-daemon 0.8 exiting. 
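The rejected call above is the negative-path check of the mDNS discovery test: a second bdev_nvme_start_mdns_discovery for the same _nvme-disc._tcp service fails with -17 (File exists) because a poller is already browsing that service. A minimal sketch of the RPC sequence, assuming the same /tmp/host.sock application socket and the rpc.py helper used elsewhere in this run (the name "mdns" for the first discovery context is taken from the stop call above and may differ in the actual script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # first start: begins browsing _nvme-disc._tcp via avahi and creates the mdns*_nvme controllers
  $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # a second start for the same service must be rejected with JSON-RPC code -17 (File exists)
  if $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
      echo "unexpected: duplicate mDNS discovery was accepted" >&2
      exit 1
  fi
  # tear down discovery before stopping the host application
  $rpc -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns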
00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.167 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.167 rmmod nvme_tcp 00:21:06.167 rmmod nvme_fabrics 00:21:06.167 rmmod nvme_keyring 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94397 ']' 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94397 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94397 ']' 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94397 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94397 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:06.438 killing process with pid 94397 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94397' 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94397 00:21:06.438 19:49:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94397 00:21:06.438 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.438 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.438 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.439 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.439 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.439 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.439 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.439 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.698 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:06.698 00:21:06.698 real 0m20.707s 00:21:06.698 user 0m40.549s 00:21:06.698 sys 0m2.022s 00:21:06.698 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.698 ************************************ 00:21:06.698 END TEST nvmf_mdns_discovery 00:21:06.698 ************************************ 00:21:06.698 19:49:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.698 19:49:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
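The teardown above is the generic nvmftestfini/nvmfcleanup path. A rough sketch of what it amounts to for this run, assuming the helper internals in test/nvmf/common.sh (the namespace removal command is an assumption; only its effect is visible in the log):

  # unload the kernel initiator modules pulled in for the TCP transport
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt application (pid 94397 in this run); the helper also waits for it to exit
  kill 94397
  # remove the target network namespace and flush the initiator-side address
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumption: what _remove_spdk_ns does
  ip -4 addr flush nvmf_init_if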
00:21:06.698 19:49:32 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:21:06.698 19:49:32 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:06.698 19:49:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:06.698 19:49:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.698 19:49:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:06.698 ************************************ 00:21:06.698 START TEST nvmf_host_multipath 00:21:06.698 ************************************ 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:06.698 * Looking for test storage... 00:21:06.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:06.698 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:06.699 Cannot 
find device "nvmf_tgt_br" 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.699 Cannot find device "nvmf_tgt_br2" 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:06.699 Cannot find device "nvmf_tgt_br" 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:06.699 Cannot find device "nvmf_tgt_br2" 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:21:06.699 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:06.957 19:49:32 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:06.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:21:06.957 00:21:06.957 --- 10.0.0.2 ping statistics --- 00:21:06.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.957 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:06.957 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:06.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:06.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:21:06.958 00:21:06.958 --- 10.0.0.3 ping statistics --- 00:21:06.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.958 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:06.958 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:06.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:21:06.958 00:21:06.958 --- 10.0.0.1 ping statistics --- 00:21:06.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.958 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:21:06.958 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.958 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=95034 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 95034 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95034 ']' 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:07.217 19:49:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:07.217 [2024-07-15 19:49:32.826693] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:21:07.217 [2024-07-15 19:49:32.826838] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.217 [2024-07-15 19:49:32.967894] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:07.475 [2024-07-15 19:49:33.083309] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
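For reference, the nvmf_veth_init sequence just above builds the topology that every 10.0.0.x address in the rest of this test refers to. A condensed sketch using the same interface names (the individual link-up commands and the error-tolerant cleanup of leftovers are omitted):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator, two for the target namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator side is 10.0.0.1, the two target interfaces are 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bridge the host-side peers together and let NVMe/TCP traffic through
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT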
00:21:07.475 [2024-07-15 19:49:33.083359] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.475 [2024-07-15 19:49:33.083370] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.475 [2024-07-15 19:49:33.083377] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.475 [2024-07-15 19:49:33.083384] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.475 [2024-07-15 19:49:33.083551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.475 [2024-07-15 19:49:33.083557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.409 19:49:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.409 19:49:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:08.409 19:49:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.409 19:49:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.409 19:49:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:08.409 19:49:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.409 19:49:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95034 00:21:08.409 19:49:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:08.409 [2024-07-15 19:49:34.129146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.409 19:49:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:08.666 Malloc0 00:21:08.667 19:49:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:08.924 19:49:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.203 19:49:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.462 [2024-07-15 19:49:35.196153] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.462 19:49:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:09.721 [2024-07-15 19:49:35.436356] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95134 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 95134 /var/tmp/bdevperf.sock 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95134 ']' 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.721 19:49:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:11.094 19:49:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.094 19:49:36 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:11.094 19:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:11.094 19:49:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:11.353 Nvme0n1 00:21:11.611 19:49:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:11.870 Nvme0n1 00:21:11.870 19:49:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:11.870 19:49:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:12.801 19:49:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:12.801 19:49:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:13.059 19:49:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:13.317 19:49:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:13.317 19:49:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95227 00:21:13.317 19:49:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95034 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:13.317 19:49:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:19.903 19:49:45 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:19.903 Attaching 4 probes... 00:21:19.903 @path[10.0.0.2, 4421]: 17245 00:21:19.903 @path[10.0.0.2, 4421]: 17613 00:21:19.903 @path[10.0.0.2, 4421]: 17692 00:21:19.903 @path[10.0.0.2, 4421]: 17466 00:21:19.903 @path[10.0.0.2, 4421]: 17735 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95227 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:19.903 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:20.161 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:20.161 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95034 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:20.161 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95352 00:21:20.161 19:49:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:26.716 19:49:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:26.716 19:49:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:26.716 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.717 Attaching 4 probes... 
00:21:26.717 @path[10.0.0.2, 4420]: 16943 00:21:26.717 @path[10.0.0.2, 4420]: 17136 00:21:26.717 @path[10.0.0.2, 4420]: 17491 00:21:26.717 @path[10.0.0.2, 4420]: 17569 00:21:26.717 @path[10.0.0.2, 4420]: 19054 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95352 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:26.717 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:26.975 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:26.975 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95488 00:21:26.975 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95034 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:26.975 19:49:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.532 Attaching 4 probes... 
00:21:33.532 @path[10.0.0.2, 4421]: 13971 00:21:33.532 @path[10.0.0.2, 4421]: 19304 00:21:33.532 @path[10.0.0.2, 4421]: 19858 00:21:33.532 @path[10.0.0.2, 4421]: 19493 00:21:33.532 @path[10.0.0.2, 4421]: 19351 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95488 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:33.532 19:49:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:33.532 19:49:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:33.791 19:49:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:33.791 19:49:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95619 00:21:33.791 19:49:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95034 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:33.791 19:49:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:40.456 Attaching 4 probes... 
00:21:40.456 00:21:40.456 00:21:40.456 00:21:40.456 00:21:40.456 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95619 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:40.456 19:50:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:40.456 19:50:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:40.714 19:50:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:40.714 19:50:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95034 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:40.714 19:50:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95749 00:21:40.714 19:50:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.271 Attaching 4 probes... 
00:21:47.271 @path[10.0.0.2, 4421]: 17937 00:21:47.271 @path[10.0.0.2, 4421]: 18952 00:21:47.271 @path[10.0.0.2, 4421]: 18314 00:21:47.271 @path[10.0.0.2, 4421]: 17890 00:21:47.271 @path[10.0.0.2, 4421]: 18155 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95749 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.271 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:47.271 [2024-07-15 19:50:12.802337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802392] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802420] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802468] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802525] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802558] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 
[2024-07-15 19:50:12.802565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802580] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802596] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802618] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802632] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.271 [2024-07-15 19:50:12.802639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802653] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the 
state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802773] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802867] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802874] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802882] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802902] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802909] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 [2024-07-15 19:50:12.802929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e9f0 is same with the state(5) to be set 00:21:47.272 19:50:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:48.205 19:50:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:48.205 19:50:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95885 00:21:48.205 19:50:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95034 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:48.205 19:50:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:54.813 19:50:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:54.813 19:50:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.813 Attaching 4 probes... 
00:21:54.813 @path[10.0.0.2, 4420]: 18637 00:21:54.813 @path[10.0.0.2, 4420]: 18824 00:21:54.813 @path[10.0.0.2, 4420]: 18838 00:21:54.813 @path[10.0.0.2, 4420]: 18628 00:21:54.813 @path[10.0.0.2, 4420]: 17398 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95885 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.813 [2024-07-15 19:50:20.401688] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:54.813 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:55.072 19:50:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:01.634 19:50:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:01.634 19:50:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96074 00:22:01.634 19:50:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95034 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:01.634 19:50:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:06.961 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:06.961 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.218 Attaching 4 probes... 
00:22:07.218 @path[10.0.0.2, 4421]: 18299 00:22:07.218 @path[10.0.0.2, 4421]: 18895 00:22:07.218 @path[10.0.0.2, 4421]: 19103 00:22:07.218 @path[10.0.0.2, 4421]: 19070 00:22:07.218 @path[10.0.0.2, 4421]: 18522 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96074 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95134 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95134 ']' 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95134 00:22:07.218 19:50:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95134 00:22:07.482 killing process with pid 95134 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95134' 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95134 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95134 00:22:07.482 Connection closed with partial response: 00:22:07.482 00:22:07.482 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95134 00:22:07.482 19:50:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:07.482 [2024-07-15 19:49:35.503366] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:22:07.482 [2024-07-15 19:49:35.503607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95134 ] 00:22:07.482 [2024-07-15 19:49:35.635396] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.482 [2024-07-15 19:49:35.742481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.482 Running I/O for 90 seconds... 
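Both confirmations derive the active port by parsing the @path histogram that nvmf_path.bt wrote to trace.txt; the cut/awk/sed pipeline is the one traced above. A self-contained recap using a sample histogram line from this run (the "line" variable exists only for this demo, the pipeline itself is verbatim):

  # extract the active port from a bpftrace '@path' histogram line
  line='@path[10.0.0.2, 4421]: 18299'      # sample taken from this run's trace.txt
  port=$(echo "$line" \
    | cut -d ']' -f1 \
    | awk '$1=="@path[10.0.0.2," {print $2}' \
    | sed -n 1p)
  echo "$port"                             # -> 4421
  [[ $port == 4421 ]] && echo 'I/O confirmed on the optimized 4421 path'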
00:22:07.482 [2024-07-15 19:49:45.913009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.482 [2024-07-15 19:49:45.913085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.913160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.482 [2024-07-15 19:49:45.913214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.913240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.913257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.913279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.913294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.913315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.913330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.913351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.913366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.913388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.913402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.913423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.913437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.913459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.913474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.914968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.914987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:07.482 [2024-07-15 19:49:45.915394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.915485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.915499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.916205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.916234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.916277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.916294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.916335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.916354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.916376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.916391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.916413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.916428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.916449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.916464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:07.482 [2024-07-15 19:49:45.916486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.482 [2024-07-15 19:49:45.916500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.916975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.916989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:22:07.483 [2024-07-15 19:49:45.917276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.917949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.917962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918034] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918420] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.918722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.918743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:07.483 [2024-07-15 19:49:45.919956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.483 [2024-07-15 19:49:45.919970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.919990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 
nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:45.920777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:45.920792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:22:07.484 [2024-07-15 19:49:52.430736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.484 [2024-07-15 19:49:52.430802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.430881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.430903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.430928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.430944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.430965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.430980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.431971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.431992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.432006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.432033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.432048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.432069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.432083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.432104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.432118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.432140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.432155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.432430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.484 [2024-07-15 19:49:52.432467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:07.484 [2024-07-15 19:49:52.432497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 
[2024-07-15 19:49:52.432669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.432975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.432997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6208 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 
nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.433707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.433721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.434931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.434955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:07.485 
[2024-07-15 19:49:52.435817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.435961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.435989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:52.436542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:52.436557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.485 [2024-07-15 19:49:59.451141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.485 [2024-07-15 19:49:59.451264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.485 [2024-07-15 19:49:59.451303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.485 [2024-07-15 19:49:59.451339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.485 [2024-07-15 19:49:59.451380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.485 [2024-07-15 19:49:59.451415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.485 [2024-07-15 19:49:59.451451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.485 [2024-07-15 19:49:59.451486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.485 [2024-07-15 19:49:59.451560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:07.485 [2024-07-15 19:49:59.451597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.486 [2024-07-15 19:49:59.451611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.451631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.486 [2024-07-15 19:49:59.451645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.451665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.486 [2024-07-15 19:49:59.451678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.451698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.486 [2024-07-15 19:49:59.451712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.451732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.486 [2024-07-15 19:49:59.451746] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.451766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.486 [2024-07-15 19:49:59.451781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.486 [2024-07-15 19:49:59.452025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.486 [2024-07-15 19:49:59.452066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.452971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.452985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.453959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.453972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 
19:49:59.454336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:07.486 [2024-07-15 19:49:59.454608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.486 [2024-07-15 19:49:59.454622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:07.487 [2024-07-15 19:49:59.454643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.487 [2024-07-15 19:49:59.454656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:07.487 [2024-07-15 19:49:59.454676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.487 [2024-07-15 19:49:59.454690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:07.487 [2024-07-15 19:49:59.454718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81208 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000
00:22:07.487 [2024-07-15 19:49:59.454733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:22:07.487 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs omitted: WRITE commands on sqid:1 (lba 81216 through 81568) all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:22:07.488 [... repeated pairs omitted: at 19:50:12 further WRITEs (lba 18312 through 18368) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), then the remaining queued WRITEs (lba 18376 through 18768) and READs (lba 17808 through 18304) completed with ABORTED - SQ DELETION (00/08) ...]
00:22:07.490 [2024-07-15 19:50:12.808538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:07.490 [... repeated entries omitted: nvme_qpair_abort_queued_reqs reported "aborting queued i/o" and the remaining queued WRITEs (lba 18776 through 18824, PRP1 0x0 PRP2 0x0) were completed manually with ABORTED - SQ DELETION (00/08) ...]
00:22:07.490 [2024-07-15 19:50:12.808934] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19c3240 was disconnected and freed. reset controller.
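The two completion statuses that dominate the trace above are printed by spdk_nvme_print_completion as (status code type / status code) pairs: (03/02) is Path Related Status / Asymmetric Access Inaccessible, meaning the ANA group of the path the I/O was sent down became inaccessible, while (00/08) is Generic Command Status / Command Aborted due to SQ Deletion, meaning whatever was still queued was failed locally when the submission queue was torn down for the controller reset. A minimal, illustrative decoder for just these two pairs (the table values follow the NVMe base specification; the decode helper is a hypothetical sketch, not an SPDK API):

    # Illustrative decoder for the two (SCT/SC) status pairs seen in this trace.
    # SCT 0x0 = Generic Command Status, SCT 0x3 = Path Related Status (NVMe base spec).
    STATUS = {
        (0x0, 0x08): "ABORTED - SQ DELETION",           # command aborted because its SQ was deleted
        (0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",  # ANA group of the path reports Inaccessible
    }

    def decode(sct: int, sc: int) -> str:
        # Map an (SCT, SC) pair such as (03/02) to the name the log prints.
        return STATUS.get((sct, sc), f"unknown status ({sct:02x}/{sc:02x})")

    print(decode(0x3, 0x02))  # ASYMMETRIC ACCESS INACCESSIBLE
    print(decode(0x0, 0x08))  # ABORTED - SQ DELETION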
00:22:07.490 [2024-07-15 19:50:12.809039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.490 [2024-07-15 19:50:12.809064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.490 [2024-07-15 19:50:12.809079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.490 [2024-07-15 19:50:12.809093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.490 [2024-07-15 19:50:12.809107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.490 [2024-07-15 19:50:12.809120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.490 [2024-07-15 19:50:12.809134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.490 [2024-07-15 19:50:12.809147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.490 [2024-07-15 19:50:12.809161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.490 [2024-07-15 19:50:12.809176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.490 [2024-07-15 19:50:12.819026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b614a0 is same with the state(5) to be set
00:22:07.490 [2024-07-15 19:50:12.820994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:07.490 [2024-07-15 19:50:12.821061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b614a0 (9): Bad file descriptor
00:22:07.490 [2024-07-15 19:50:12.821318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.490 [2024-07-15 19:50:12.821359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b614a0 with addr=10.0.0.2, port=4421
00:22:07.490 [2024-07-15 19:50:12.821392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b614a0 is same with the state(5) to be set
00:22:07.490 [2024-07-15 19:50:12.821424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b614a0 (9): Bad file descriptor
00:22:07.490 [2024-07-15 19:50:12.821453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:07.490 [2024-07-15 19:50:12.821471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:07.490 [2024-07-15 19:50:12.821506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:07.490 [2024-07-15 19:50:12.821541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
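The first reconnect attempt after the reset fails at the socket level: connect() to 10.0.0.2 port 4421 in posix_sock_create returns errno = 111, which on Linux is ECONNREFUSED (nothing was accepting on that listener at that moment), so nvme_ctrlr_process_init leaves the controller in the failed state and bdev_nvme retries the reset later. A quick standalone check of that errno mapping (plain Python, not part of the test; the symbolic name is Linux-specific):

    import errno
    import os

    # errno 111 as reported by the connect() call in posix.c above
    print(errno.errorcode[111])  # 'ECONNREFUSED' on Linux
    print(os.strerror(111))      # 'Connection refused'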
00:22:07.490 [2024-07-15 19:50:12.821559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:07.490 [2024-07-15 19:50:22.902131] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:07.490 Received shutdown signal, test time was about 55.415864 seconds 00:22:07.490 00:22:07.490 Latency(us) 00:22:07.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.490 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:07.490 Verification LBA range: start 0x0 length 0x4000 00:22:07.490 Nvme0n1 : 55.42 7756.34 30.30 0.00 0.00 16473.89 318.37 7046430.72 00:22:07.490 =================================================================================================================== 00:22:07.490 Total : 7756.34 30.30 0.00 0.00 16473.89 318.37 7046430.72 00:22:07.490 19:50:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:07.747 19:50:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:07.747 19:50:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:07.747 19:50:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:07.747 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:07.747 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.004 rmmod nvme_tcp 00:22:08.004 rmmod nvme_fabrics 00:22:08.004 rmmod nvme_keyring 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 95034 ']' 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 95034 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95034 ']' 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95034 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95034 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:08.004 killing process with pid 95034 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95034' 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # 
kill 95034 00:22:08.004 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95034 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:08.261 00:22:08.261 real 1m1.615s 00:22:08.261 user 2m55.024s 00:22:08.261 sys 0m13.516s 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.261 19:50:33 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:08.261 ************************************ 00:22:08.261 END TEST nvmf_host_multipath 00:22:08.261 ************************************ 00:22:08.261 19:50:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:08.261 19:50:33 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:08.261 19:50:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:08.261 19:50:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.261 19:50:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.261 ************************************ 00:22:08.261 START TEST nvmf_timeout 00:22:08.261 ************************************ 00:22:08.261 19:50:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:08.261 * Looking for test storage... 
00:22:08.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:08.261 19:50:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:08.261 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.519 19:50:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.520 
19:50:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.520 19:50:34 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:08.520 Cannot find device "nvmf_tgt_br" 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:08.520 Cannot find device "nvmf_tgt_br2" 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:08.520 Cannot find device "nvmf_tgt_br" 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:08.520 Cannot find device "nvmf_tgt_br2" 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:08.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:08.520 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:08.520 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:08.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:22:08.778 00:22:08.778 --- 10.0.0.2 ping statistics --- 00:22:08.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.778 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:08.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:08.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:22:08.778 00:22:08.778 --- 10.0.0.3 ping statistics --- 00:22:08.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.778 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:08.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:22:08.778 00:22:08.778 --- 10.0.0.1 ping statistics --- 00:22:08.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.778 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96406 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96406 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96406 ']' 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.778 19:50:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:08.778 [2024-07-15 19:50:34.509582] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
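For orientation, the veth/namespace topology that nvmf_veth_init assembled in the trace above condenses to the commands below (a readability sketch of the same ip/iptables steps from nvmf/common.sh, not a substitute for it): the host keeps nvmf_init_if (10.0.0.1), the nvmf_tgt_ns_spdk namespace gets nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), and the peer ends are joined by the nvmf_br bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3     # host -> target addresses, as checked above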
00:22:08.778 [2024-07-15 19:50:34.509680] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.036 [2024-07-15 19:50:34.649952] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:09.036 [2024-07-15 19:50:34.732968] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.036 [2024-07-15 19:50:34.733034] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.036 [2024-07-15 19:50:34.733060] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.036 [2024-07-15 19:50:34.733068] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.036 [2024-07-15 19:50:34.733074] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.036 [2024-07-15 19:50:34.733459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.036 [2024-07-15 19:50:34.733529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.968 19:50:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.968 19:50:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:09.968 19:50:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.968 19:50:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.968 19:50:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:09.968 19:50:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.968 19:50:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:09.968 19:50:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:10.226 [2024-07-15 19:50:35.781479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.226 19:50:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:10.482 Malloc0 00:22:10.482 19:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.739 19:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:10.997 19:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.255 [2024-07-15 19:50:36.827286] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96497 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96497 /var/tmp/bdevperf.sock 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96497 ']' 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.255 19:50:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:11.255 [2024-07-15 19:50:36.887997] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:22:11.255 [2024-07-15 19:50:36.888082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96497 ] 00:22:11.255 [2024-07-15 19:50:37.018668] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.512 [2024-07-15 19:50:37.129299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.077 19:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.077 19:50:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:12.077 19:50:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:12.335 19:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:12.901 NVMe0n1 00:22:12.901 19:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96545 00:22:12.901 19:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:12.901 19:50:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:12.901 Running I/O for 10 seconds... 
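Condensed from the trace above, the per-test setup is roughly: provision a malloc-backed subsystem on the target, then point a standalone bdevperf at it with a 5 s controller-loss timeout and a 2 s reconnect delay, which should make the initiator retry reconnects every couple of seconds and give up on the controller after about five seconds of failures. The sketch below repeats the same commands for readability (paths assume the job's spdk_repo layout; socket waits are omitted):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: TCP transport, one 64 MB / 512 B-block malloc namespace, listener on 10.0.0.2:4420
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: standalone bdevperf with its own RPC socket (backgrounded here for the sketch)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
Removing the subsystem listener, as the next rpc.py call below does, is what forces the reconnect/timeout path under test.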
00:22:13.902 19:50:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.902 [2024-07-15 19:50:39.641908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7bd0 is same with the state(5) to be set
[... identical tcp.c:1621 "recv state of tqpair=0x20f7bd0 is same with the state(5) to be set" messages repeated through 19:50:39.643057 elided ...]
00:22:13.903 [2024-07-15 19:50:39.643546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643612] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.643979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.643990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.644000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.644011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.644021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.644032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87432 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.644042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.644053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.644063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.644074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.644083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.644095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.644106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.644117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.903 [2024-07-15 19:50:39.644126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.903 [2024-07-15 19:50:39.644138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:13.904 [2024-07-15 19:50:39.644268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 
19:50:39.644489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.904 [2024-07-15 19:50:39.644958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.904 [2024-07-15 19:50:39.644982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.644993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.904 [2024-07-15 19:50:39.645003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.645014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.904 [2024-07-15 19:50:39.645024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.645035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.904 [2024-07-15 19:50:39.645045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.645056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.904 [2024-07-15 19:50:39.645065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.645077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.904 [2024-07-15 19:50:39.645086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.645097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.904 [2024-07-15 19:50:39.645106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.645118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.904 [2024-07-15 19:50:39.645127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:13.904 [2024-07-15 19:50:39.645140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 
19:50:39.645385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:66 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.905 [2024-07-15 19:50:39.645955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.905 [2024-07-15 19:50:39.645980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.645992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.905 [2024-07-15 19:50:39.646001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.646025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87792 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:13.905 [2024-07-15 19:50:39.646042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.905 [2024-07-15 19:50:39.646053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 
[2024-07-15 19:50:39.646261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.906 [2024-07-15 19:50:39.646347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b86f0 is same with the state(5) to be set 00:22:13.906 [2024-07-15 19:50:39.646376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:13.906 [2024-07-15 19:50:39.646384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:13.906 [2024-07-15 19:50:39.646398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87912 len:8 PRP1 0x0 PRP2 0x0 00:22:13.906 [2024-07-15 19:50:39.646408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.906 [2024-07-15 19:50:39.646462] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12b86f0 was disconnected and freed. reset controller. 
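For reference when reading the long abort dump above: the "(00/08)" that spdk_nvme_print_completion prints is the NVMe status code type / status code pair, and in the generic status code set 0x08 is "Command Aborted due to SQ Deletion", which is why every queued READ/WRITE on qpair 0x12b86f0 completes that way once the submission queue is torn down. A minimal decoding sketch (illustrative only, not part of the test output):

    # Decode the "sct/sc" pair shown as "(00/08)" in the completions above.
    NVME_SCT_GENERIC = 0x0
    NVME_SC_ABORTED_SQ_DELETION = 0x08

    def decode(sct, sc):
        if sct == NVME_SCT_GENERIC and sc == NVME_SC_ABORTED_SQ_DELETION:
            return "ABORTED - SQ DELETION"
        return "sct=%#x sc=%#x" % (sct, sc)

    print(decode(0x00, 0x08))  # matches the string SPDK prints above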
00:22:13.906 [2024-07-15 19:50:39.646685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:13.906 [2024-07-15 19:50:39.646775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12493e0 (9): Bad file descriptor 00:22:13.906 [2024-07-15 19:50:39.646897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.906 [2024-07-15 19:50:39.646929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12493e0 with addr=10.0.0.2, port=4420 00:22:13.906 [2024-07-15 19:50:39.646940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12493e0 is same with the state(5) to be set 00:22:13.906 [2024-07-15 19:50:39.646959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12493e0 (9): Bad file descriptor 00:22:13.906 [2024-07-15 19:50:39.646987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:13.906 [2024-07-15 19:50:39.646997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:13.906 [2024-07-15 19:50:39.647008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:13.906 [2024-07-15 19:50:39.647038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.906 [2024-07-15 19:50:39.647049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:13.906 19:50:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:16.436 [2024-07-15 19:50:41.647229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.436 [2024-07-15 19:50:41.647308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12493e0 with addr=10.0.0.2, port=4420 00:22:16.436 [2024-07-15 19:50:41.647326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12493e0 is same with the state(5) to be set 00:22:16.436 [2024-07-15 19:50:41.647352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12493e0 (9): Bad file descriptor 00:22:16.436 [2024-07-15 19:50:41.647371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.436 [2024-07-15 19:50:41.647381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:16.436 [2024-07-15 19:50:41.647397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.436 [2024-07-15 19:50:41.647440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
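One detail worth keeping in mind for the reconnect attempts above and below: errno = 111 in the posix_sock_create connect() failure is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 at that moment, so bdev_nvme keeps retrying the reset. A quick check (illustrative only, not part of the test output):

    import errno, os

    # errno 111 from "connect() failed, errno = 111" above is ECONNREFUSED on Linux.
    assert errno.ECONNREFUSED == 111
    print(os.strerror(errno.ECONNREFUSED))  # Connection refused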
00:22:16.436 [2024-07-15 19:50:41.647451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.436 19:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:16.436 19:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:16.436 19:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:16.436 19:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:16.436 19:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:16.436 19:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:16.436 19:50:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:16.436 19:50:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:16.436 19:50:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:18.335 [2024-07-15 19:50:43.647631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.335 [2024-07-15 19:50:43.647724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12493e0 with addr=10.0.0.2, port=4420 00:22:18.335 [2024-07-15 19:50:43.647741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12493e0 is same with the state(5) to be set 00:22:18.335 [2024-07-15 19:50:43.647766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12493e0 (9): Bad file descriptor 00:22:18.335 [2024-07-15 19:50:43.647785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:18.335 [2024-07-15 19:50:43.647795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:18.335 [2024-07-15 19:50:43.647805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:18.335 [2024-07-15 19:50:43.647845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.335 [2024-07-15 19:50:43.647857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:20.235 [2024-07-15 19:50:45.647968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:20.235 [2024-07-15 19:50:45.648037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:20.235 [2024-07-15 19:50:45.648065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:20.235 [2024-07-15 19:50:45.648075] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:20.235 [2024-07-15 19:50:45.648102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
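The host/timeout.sh@41 and @37 lines above poll bdevperf's RPC socket to confirm that the NVMe0 controller and the NVMe0n1 bdev are still registered while reconnects keep failing. A rough Python equivalent of that check, shelling out to the same rpc.py commands shown in the trace (the expected names are taken from the [[ ... ]] comparisons above), might look like:

    import json, subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path from the trace above
    SOCK = "/var/tmp/bdevperf.sock"

    def names(method):
        # Equivalent of: rpc.py -s /var/tmp/bdevperf.sock <method> | jq -r '.[].name'
        out = subprocess.check_output([RPC, "-s", SOCK, method])
        return [entry["name"] for entry in json.loads(out)]

    assert names("bdev_nvme_get_controllers") == ["NVMe0"]   # host/timeout.sh@57
    assert names("bdev_get_bdevs") == ["NVMe0n1"]            # host/timeout.sh@58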
00:22:21.169 00:22:21.169 Latency(us) 00:22:21.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.169 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.169 Verification LBA range: start 0x0 length 0x4000 00:22:21.169 NVMe0n1 : 8.14 1340.70 5.24 15.73 0.00 94221.60 2055.45 7015926.69 00:22:21.169 =================================================================================================================== 00:22:21.169 Total : 1340.70 5.24 15.73 0.00 94221.60 2055.45 7015926.69 00:22:21.169 0 00:22:21.426 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:21.426 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:21.426 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:21.684 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:21.684 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:21.684 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:21.684 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:21.942 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:21.942 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96545 00:22:21.942 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96497 00:22:21.942 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96497 ']' 00:22:21.942 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96497 00:22:21.942 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:21.942 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.942 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96497 00:22:22.200 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:22.200 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:22.200 killing process with pid 96497 00:22:22.200 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96497' 00:22:22.200 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96497 00:22:22.200 Received shutdown signal, test time was about 9.233753 seconds 00:22:22.200 00:22:22.200 Latency(us) 00:22:22.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.200 =================================================================================================================== 00:22:22.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.200 19:50:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96497 00:22:22.200 19:50:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.458 [2024-07-15 19:50:48.172656] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 
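As a quick consistency check on the bdevperf summary above: with the 4096-byte I/O size this job uses, the MiB/s column is just the IOPS column scaled by the I/O size (sketch only, numbers copied from the NVMe0n1 row):

    iops, io_size_bytes = 1340.70, 4096      # NVMe0n1 row and the job's IO size above
    mib_per_s = iops * io_size_bytes / (1024 ** 2)
    print(round(mib_per_s, 2))               # 5.24, matching the MiB/s column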
00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96697 00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96697 /var/tmp/bdevperf.sock 00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96697 ']' 00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.458 19:50:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:22.458 [2024-07-15 19:50:48.235342] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:22:22.458 [2024-07-15 19:50:48.235425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96697 ] 00:22:22.715 [2024-07-15 19:50:48.366555] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.715 [2024-07-15 19:50:48.463484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.648 19:50:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.648 19:50:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:23.648 19:50:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:23.648 19:50:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:24.214 NVMe0n1 00:22:24.214 19:50:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96745 00:22:24.214 19:50:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.214 19:50:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:24.214 Running I/O for 10 seconds... 
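The @79 attach above is where the second half of the timeout test arms its reconnect policy. A sketch of the same call driven from Python through the rpc.py CLI (arguments copied verbatim from the trace; the inline comments paraphrase how these bdev_nvme options are commonly described and are not taken from the log):

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bdevperf.sock"

    subprocess.check_call([
        RPC, "-s", SOCK, "bdev_nvme_attach_controller",
        "-b", "NVMe0", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
        "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
        "--reconnect-delay-sec", "1",       # pause between reconnect attempts
        "--fast-io-fail-timeout-sec", "2",  # fail queued I/O after 2 s without a connection
        "--ctrlr-loss-timeout-sec", "5",    # give up on the controller after 5 s offline
    ])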
00:22:25.149 19:50:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:25.414 [2024-07-15 19:50:50.934276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f5b30 is same with the state(5) to be set
[... the same tcp.c:1621 recv-state error for tqpair=0x22f5b30 repeats at timestamps 19:50:50.934334 through 19:50:50.934923 ...]
00:22:25.415 [2024-07-15 19:50:50.935357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:25.415 [2024-07-15 19:50:50.935412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ command / ABORTED - SQ DELETION completion pairs repeat for lba:89120 through lba:89680 ...]
00:22:25.416 [2024-07-15 19:50:50.936973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:25.416 [2024-07-15 19:50:50.936982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE command / ABORTED - SQ DELETION completion pairs repeat for lba:89760 through lba:90128 ...]
00:22:25.417 [2024-07-15 19:50:50.938064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:25.417 [2024-07-15 19:50:50.938078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89688 len:8 PRP1 0x0 PRP2 0x0
00:22:25.417 [2024-07-15 19:50:50.938087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the aborting queued i/o / manual completion sequence repeats for lba:89696 through lba:89744 ...]
00:22:25.418 [2024-07-15 19:50:50.938398] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfe36f0 was disconnected and freed. reset controller.
00:22:25.418 [2024-07-15 19:50:50.938635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:25.418 [2024-07-15 19:50:50.938722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf743e0 (9): Bad file descriptor
00:22:25.418 [2024-07-15 19:50:50.938834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.418 [2024-07-15 19:50:50.938856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf743e0 with addr=10.0.0.2, port=4420
00:22:25.418 [2024-07-15 19:50:50.938867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf743e0 is same with the state(5) to be set
00:22:25.418 [2024-07-15 19:50:50.938886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf743e0 (9): Bad file descriptor
00:22:25.418 [2024-07-15 19:50:50.938903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:25.418 [2024-07-15 19:50:50.956520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:25.418 [2024-07-15 19:50:50.956583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:25.418 [2024-07-15 19:50:50.956636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
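Note: the "connect() failed, errno = 111" (ECONNREFUSED) and "Resetting controller failed." messages above are the expected host-side symptom of the listener having just been removed, and the same pattern recurs each time the test pulls the listener. As a small post-mortem aid, a hedged sketch for tallying those failures in a saved copy of this console output; the log file name is an assumption, not something produced by this run:

  # Hypothetical helper: count refused reconnects and failed reset attempts in a
  # saved copy of this console output (file name assumed, not part of the run).
  log=nvmf_tcp_timeout_console.log
  grep -oF 'connect() failed, errno = 111' "$log" | wc -l
  grep -oF 'Resetting controller failed.' "$log" | wc -l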
00:22:25.418 [2024-07-15 19:50:50.956656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:25.418 19:50:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:26.394 [2024-07-15 19:50:51.956858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:26.394 [2024-07-15 19:50:51.956951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf743e0 with addr=10.0.0.2, port=4420
00:22:26.394 [2024-07-15 19:50:51.956969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf743e0 is same with the state(5) to be set
00:22:26.394 [2024-07-15 19:50:51.956995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf743e0 (9): Bad file descriptor
00:22:26.394 [2024-07-15 19:50:51.957013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:26.394 [2024-07-15 19:50:51.957023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:26.394 [2024-07-15 19:50:51.957033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:26.394 [2024-07-15 19:50:51.957072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:26.394 [2024-07-15 19:50:51.957086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:26.394 19:50:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:26.652 [2024-07-15 19:50:52.211376] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:26.652 19:50:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96745
00:22:27.219 [2024-07-15 19:50:52.977182] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:35.343
00:22:35.343                                                  Latency(us)
00:22:35.343 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:35.343 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:35.343 	 Verification LBA range: start 0x0 length 0x4000
00:22:35.343 	 NVMe0n1             :      10.01    6866.71      26.82       0.00     0.00   18599.33    1966.08 3019898.88
00:22:35.343 ===================================================================================================================
00:22:35.343 Total                       :              6866.71      26.82       0.00     0.00   18599.33    1966.08 3019898.88
00:22:35.343 0
00:22:35.343 19:50:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96862
00:22:35.343 19:50:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:35.343 19:50:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:35.343 Running I/O for 10 seconds...
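Two notes on the iteration that just completed, before the second run continues below. First, the summary table is internally consistent: 6866.71 IOPS of 4096-byte I/O is 6866.71 x 4096 / 1048576 = 26.82 MiB/s, matching the reported throughput. Second, the whole timeout scenario is driven by the two rpc.py calls already visible in the log (remove_listener at host/timeout.sh@87 and @99, add_listener at @91); as a hedged recap only, using the same NQN, address, port and script path as this run, the listener toggle would look like this when run by hand:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the TCP listener: host I/O starts aborting and controller resets fail, as logged above.
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # Restore the listener: the next reset attempt reconnects and completes successfully.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420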
00:22:35.343 19:51:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-07-15 19:51:01.088855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214e840 is same with the state(5) to be set
[duplicate tcp.c:1621:nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x214e840, logged from 19:51:01.088925 through 19:51:01.089513 while the listener is torn down, omitted]
00:22:35.344 [2024-07-15 19:51:01.090924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.344 [2024-07-15 19:51:01.090980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[each of the remaining outstanding qid:1 READ/WRITE commands, covering LBAs 86184 through 87184, is printed and completed with the same ABORTED - SQ DELETION (00/08) status; the duplicate print_command/print_completion pairs are omitted]
00:22:35.346 [2024-07-15 19:51:01.093769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:35.346 [2024-07-15 19:51:01.093785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:35.346 [2024-07-15 19:51:01.093794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87192 len:8 PRP1 0x0 PRP2 0x0
00:22:35.346 [2024-07-15 19:51:01.093803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.346 [2024-07-15 19:51:01.093857] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfe88f0 was disconnected and freed. reset controller.
00:22:35.346 [2024-07-15 19:51:01.094106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:35.346 [2024-07-15 19:51:01.094222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf743e0 (9): Bad file descriptor 00:22:35.346 [2024-07-15 19:51:01.094338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.346 [2024-07-15 19:51:01.094362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf743e0 with addr=10.0.0.2, port=4420 00:22:35.346 [2024-07-15 19:51:01.094374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf743e0 is same with the state(5) to be set 00:22:35.346 [2024-07-15 19:51:01.094392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf743e0 (9): Bad file descriptor 00:22:35.346 [2024-07-15 19:51:01.094409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:35.346 [2024-07-15 19:51:01.094418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:35.346 [2024-07-15 19:51:01.094429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:35.346 [2024-07-15 19:51:01.094449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:35.346 [2024-07-15 19:51:01.094461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:35.346 19:51:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:36.715 [2024-07-15 19:51:02.094584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.715 [2024-07-15 19:51:02.094677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf743e0 with addr=10.0.0.2, port=4420 00:22:36.715 [2024-07-15 19:51:02.094694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf743e0 is same with the state(5) to be set 00:22:36.715 [2024-07-15 19:51:02.094721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf743e0 (9): Bad file descriptor 00:22:36.715 [2024-07-15 19:51:02.094740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:36.715 [2024-07-15 19:51:02.094750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:36.715 [2024-07-15 19:51:02.094760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:36.715 [2024-07-15 19:51:02.094818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:36.715 [2024-07-15 19:51:02.094830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.648 [2024-07-15 19:51:03.094977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.648 [2024-07-15 19:51:03.095094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf743e0 with addr=10.0.0.2, port=4420 00:22:37.648 [2024-07-15 19:51:03.095112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf743e0 is same with the state(5) to be set 00:22:37.648 [2024-07-15 19:51:03.095136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf743e0 (9): Bad file descriptor 00:22:37.648 [2024-07-15 19:51:03.095155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:37.648 [2024-07-15 19:51:03.095164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:37.648 [2024-07-15 19:51:03.095187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:37.648 [2024-07-15 19:51:03.095232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:37.648 [2024-07-15 19:51:03.095246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.578 [2024-07-15 19:51:04.098575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.578 [2024-07-15 19:51:04.098678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf743e0 with addr=10.0.0.2, port=4420 00:22:38.578 [2024-07-15 19:51:04.098694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf743e0 is same with the state(5) to be set 00:22:38.578 [2024-07-15 19:51:04.098957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf743e0 (9): Bad file descriptor 00:22:38.578 [2024-07-15 19:51:04.099233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:38.578 [2024-07-15 19:51:04.099256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:38.578 [2024-07-15 19:51:04.099268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:38.578 [2024-07-15 19:51:04.102995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.578 [2024-07-15 19:51:04.103043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.578 19:51:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.578 [2024-07-15 19:51:04.350587] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.837 19:51:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96862 00:22:39.401 [2024-07-15 19:51:05.136912] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
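While the listener is down, the block above shows the initiator looping through connect() errno 111, "Ctrlr is in error state", "controller reinitialization failed" and another disconnect on each retry, until the listener is re-added at 19:51:04 and the reset completes at 19:51:05. A hedged way to watch that churn from a second shell, assuming the bdevperf RPC socket used in this run (/var/tmp/bdevperf.sock):
# Poll the bdev_nvme controller list once a second while the test runs (sketch only, not part of the test).
while sleep 1; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
done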
00:22:44.665 00:22:44.665 Latency(us) 00:22:44.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.665 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:44.665 Verification LBA range: start 0x0 length 0x4000 00:22:44.665 NVMe0n1 : 10.01 5045.87 19.71 4036.82 0.00 14067.99 1936.29 3019898.88 00:22:44.665 =================================================================================================================== 00:22:44.665 Total : 5045.87 19.71 4036.82 0.00 14067.99 0.00 3019898.88 00:22:44.665 0 00:22:44.665 19:51:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96697 00:22:44.665 19:51:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96697 ']' 00:22:44.665 19:51:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96697 00:22:44.665 19:51:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:44.665 19:51:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.665 19:51:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96697 00:22:44.665 killing process with pid 96697 00:22:44.665 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.665 00:22:44.665 Latency(us) 00:22:44.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.665 =================================================================================================================== 00:22:44.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96697' 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96697 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96697 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96989 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96989 /var/tmp/bdevperf.sock 00:22:44.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96989 ']' 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.665 19:51:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.665 [2024-07-15 19:51:10.291771] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:22:44.665 [2024-07-15 19:51:10.292095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96989 ] 00:22:44.665 [2024-07-15 19:51:10.430186] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.924 [2024-07-15 19:51:10.549349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.860 19:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.860 19:51:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:45.860 19:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97016 00:22:45.860 19:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96989 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:45.860 19:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:45.860 19:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:46.119 NVMe0n1 00:22:46.119 19:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97071 00:22:46.119 19:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:46.119 19:51:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.377 Running I/O for 10 seconds... 
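This second phase re-runs the workload with explicit reconnect pacing: bdevperf is started with -z so it idles on /var/tmp/bdevperf.sock, the nvmf_timeout.bt bpftrace script is attached to its PID, the NVMe bdev module is configured, and the controller is attached with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5 before perform_tests launches the 128-deep, 4096-byte random-read run shown above. A minimal sketch of that sequence, reusing only the binaries, flags, and arguments printed in the trace (the backgrounding, PID handling, and trace redirection are assumptions):

    # Start bdevperf idle: core mask 0x4, queue depth 128, 4 KiB random reads
    # for 10 s; -z makes it wait on the RPC socket so the controller can be
    # attached before the run starts.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!

    # Attach the bpftrace probes that timestamp reset/reconnect events.
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt \
        > /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt &

    # Configure the NVMe bdev module (flags as used by the script), then attach
    # the controller with a 2 s reconnect delay and a 5 s controller-loss timeout.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC bdev_nvme_set_options -r -1 -e 9
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the timed I/O run.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &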
00:22:47.316 19:51:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.316 [2024-07-15 19:51:13.081393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081466] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081486] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081494] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081510] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081542] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081571] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081592] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081606] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151950 is same with the state(5) to be set 00:22:47.316 [2024-07-15 19:51:13.081922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.316 [2024-07-15 19:51:13.081952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.316 [2024-07-15 19:51:13.081973] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.316 [2024-07-15 19:51:13.081983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.316 [2024-07-15 19:51:13.081995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.316 [2024-07-15 19:51:13.082005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.316 [2024-07-15 19:51:13.082016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.316 [2024-07-15 19:51:13.082051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.316 [2024-07-15 19:51:13.082065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.316 [2024-07-15 19:51:13.082075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.316 [2024-07-15 19:51:13.082086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.316 [2024-07-15 19:51:13.082096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.316 [2024-07-15 19:51:13.082107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.316 [2024-07-15 19:51:13.082117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.316 [2024-07-15 19:51:13.082129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.316 [2024-07-15 19:51:13.082138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.316 [2024-07-15 19:51:13.082150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082459] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.082982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.082993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.083002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.083014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.083024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.083036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.083045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.083057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.083066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.083081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.317 [2024-07-15 19:51:13.083091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.317 [2024-07-15 19:51:13.083102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 
[2024-07-15 19:51:13.083111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083350] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.083987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.083999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.084008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.084019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.084028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.318 [2024-07-15 19:51:13.084039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.318 [2024-07-15 19:51:13.084055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.319 [2024-07-15 19:51:13.084278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084493] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.319 [2024-07-15 19:51:13.084808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.319 [2024-07-15 19:51:13.084846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.319 [2024-07-15 19:51:13.084855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67208 len:8 PRP1 0x0 PRP2 0x0 00:22:47.319 [2024-07-15 19:51:13.084864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.084918] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a16f0 was disconnected and freed. reset controller. 
00:22:47.319 [2024-07-15 19:51:13.084999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.319 [2024-07-15 19:51:13.085015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.085026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.319 [2024-07-15 19:51:13.085035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.085045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.319 [2024-07-15 19:51:13.085055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.085065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.319 [2024-07-15 19:51:13.085074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.319 [2024-07-15 19:51:13.085083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21323e0 is same with the state(5) to be set 00:22:47.319 [2024-07-15 19:51:13.085372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.319 [2024-07-15 19:51:13.085408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21323e0 (9): Bad file descriptor 00:22:47.319 [2024-07-15 19:51:13.085556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.319 [2024-07-15 19:51:13.085578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21323e0 with addr=10.0.0.2, port=4420 00:22:47.319 [2024-07-15 19:51:13.085590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21323e0 is same with the state(5) to be set 00:22:47.319 [2024-07-15 19:51:13.085609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21323e0 (9): Bad file descriptor 00:22:47.319 [2024-07-15 19:51:13.085625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:47.319 [2024-07-15 19:51:13.085635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:47.320 [2024-07-15 19:51:13.085645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:47.320 [2024-07-15 19:51:13.085665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
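With the listener gone, every command still queued on the I/O qpair is completed back as "ABORTED - SQ DELETION" (the long dump above), the qpair is freed, and bdev_nvme enters its paced reconnect loop: the retries that follow land at 19:51:15 and 19:51:17, two seconds apart per --reconnect-delay-sec 2, and once the 5-second --ctrlr-loss-timeout-sec window lapses the controller is left in the failed state. The pass/fail decision further below counts these paced attempts in the bpftrace output; a minimal sketch of that check, assuming the same trace path and the threshold implied by the "(( 3 <= 2 ))" comparison in the script trace:

    # Count the paced reconnect attempts recorded by nvmf_timeout.bt and require
    # more than two of them (sketch; the real check lives in host/timeout.sh).
    TRACE=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$TRACE")
    if (( delays <= 2 )); then
        echo "expected more than 2 paced reconnects, saw $delays" >&2
        exit 1
    fi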
00:22:47.320 [2024-07-15 19:51:13.085676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.579 19:51:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 97071 00:22:49.501 [2024-07-15 19:51:15.085898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.501 [2024-07-15 19:51:15.085988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21323e0 with addr=10.0.0.2, port=4420 00:22:49.501 [2024-07-15 19:51:15.086005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21323e0 is same with the state(5) to be set 00:22:49.501 [2024-07-15 19:51:15.086055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21323e0 (9): Bad file descriptor 00:22:49.501 [2024-07-15 19:51:15.086078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.501 [2024-07-15 19:51:15.086089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:49.501 [2024-07-15 19:51:15.086101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.501 [2024-07-15 19:51:15.086128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.501 [2024-07-15 19:51:15.086140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:51.412 [2024-07-15 19:51:17.086348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.412 [2024-07-15 19:51:17.086464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21323e0 with addr=10.0.0.2, port=4420 00:22:51.412 [2024-07-15 19:51:17.086487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21323e0 is same with the state(5) to be set 00:22:51.412 [2024-07-15 19:51:17.086513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21323e0 (9): Bad file descriptor 00:22:51.412 [2024-07-15 19:51:17.086532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.412 [2024-07-15 19:51:17.086549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:51.412 [2024-07-15 19:51:17.086559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:51.412 [2024-07-15 19:51:17.086586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.412 [2024-07-15 19:51:17.086597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:53.312 [2024-07-15 19:51:19.086669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:53.312 [2024-07-15 19:51:19.086735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.312 [2024-07-15 19:51:19.086765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:53.312 [2024-07-15 19:51:19.086775] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:53.312 [2024-07-15 19:51:19.086803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.709 00:22:54.709 Latency(us) 00:22:54.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.710 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:54.710 NVMe0n1 : 8.09 2453.88 9.59 15.83 0.00 51784.94 2263.97 7015926.69 00:22:54.710 =================================================================================================================== 00:22:54.710 Total : 2453.88 9.59 15.83 0.00 51784.94 2263.97 7015926.69 00:22:54.710 0 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:54.710 Attaching 5 probes... 00:22:54.710 1243.464005: reset bdev controller NVMe0 00:22:54.710 1243.543434: reconnect bdev controller NVMe0 00:22:54.710 3243.869031: reconnect delay bdev controller NVMe0 00:22:54.710 3243.887786: reconnect bdev controller NVMe0 00:22:54.710 5244.306668: reconnect delay bdev controller NVMe0 00:22:54.710 5244.327486: reconnect bdev controller NVMe0 00:22:54.710 7244.726276: reconnect delay bdev controller NVMe0 00:22:54.710 7244.746458: reconnect bdev controller NVMe0 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 97016 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96989 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96989 ']' 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96989 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96989 00:22:54.710 killing process with pid 96989 00:22:54.710 Received shutdown signal, test time was about 8.147708 seconds 00:22:54.710 00:22:54.710 Latency(us) 00:22:54.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.710 =================================================================================================================== 00:22:54.710 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96989' 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 96989 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96989 00:22:54.710 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.967 rmmod nvme_tcp 00:22:54.967 rmmod nvme_fabrics 00:22:54.967 rmmod nvme_keyring 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96406 ']' 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96406 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96406 ']' 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96406 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96406 00:22:54.967 killing process with pid 96406 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96406' 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96406 00:22:54.967 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96406 00:22:55.225 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:55.225 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:55.225 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:55.225 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.225 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:55.225 19:51:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.225 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.225 19:51:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.483 19:51:21 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:55.483 ************************************ 00:22:55.483 END TEST 
nvmf_timeout 00:22:55.483 ************************************ 00:22:55.483 00:22:55.483 real 0m47.055s 00:22:55.483 user 2m18.549s 00:22:55.483 sys 0m4.911s 00:22:55.483 19:51:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:55.483 19:51:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:55.483 19:51:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:55.483 19:51:21 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:22:55.483 19:51:21 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:22:55.483 19:51:21 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.483 19:51:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.483 19:51:21 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:22:55.483 00:22:55.483 real 15m41.848s 00:22:55.483 user 41m39.066s 00:22:55.483 sys 3m23.681s 00:22:55.483 19:51:21 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:55.483 19:51:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.483 ************************************ 00:22:55.483 END TEST nvmf_tcp 00:22:55.483 ************************************ 00:22:55.483 19:51:21 -- common/autotest_common.sh@1142 -- # return 0 00:22:55.483 19:51:21 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:22:55.483 19:51:21 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:55.483 19:51:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:55.483 19:51:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.483 19:51:21 -- common/autotest_common.sh@10 -- # set +x 00:22:55.483 ************************************ 00:22:55.483 START TEST spdkcli_nvmf_tcp 00:22:55.483 ************************************ 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:55.483 * Looking for test storage... 
00:22:55.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.483 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=97287 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 97287 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 97287 ']' 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.484 19:51:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.742 [2024-07-15 19:51:21.346515] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:22:55.742 [2024-07-15 19:51:21.346663] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97287 ] 00:22:55.742 [2024-07-15 19:51:21.483013] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:56.052 [2024-07-15 19:51:21.561717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.052 [2024-07-15 19:51:21.561723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.616 19:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.616 19:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:22:56.616 19:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:56.616 19:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.616 19:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.616 19:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:56.616 19:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:56.617 19:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:56.617 19:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:56.617 19:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.901 19:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:56.901 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:56.901 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:56.901 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:56.901 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:56.901 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:56.901 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:56.901 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:56.901 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:56.901 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:56.901 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:56.901 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:56.901 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:56.901 ' 00:22:59.432 [2024-07-15 19:51:25.074421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.809 [2024-07-15 19:51:26.351416] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:23:03.369 [2024-07-15 19:51:28.700925] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:23:05.296 [2024-07-15 19:51:30.698169] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:23:06.672 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:23:06.672 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:23:06.672 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:23:06.672 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:23:06.672 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:23:06.672 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:23:06.672 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:23:06.672 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:23:06.672 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:06.672 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:06.672 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:23:06.672 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:23:06.672 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:23:06.672 19:51:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:23:06.672 19:51:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:06.672 19:51:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.672 19:51:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:23:06.672 19:51:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:06.672 19:51:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.672 19:51:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:23:06.672 19:51:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.240 19:51:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:23:07.240 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:23:07.240 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:07.240 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:23:07.240 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:23:07.240 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:23:07.240 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:23:07.240 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:07.240 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:23:07.240 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:23:07.240 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:23:07.240 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:23:07.240 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:23:07.240 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:23:07.240 ' 00:23:12.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:12.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:12.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:12.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:12.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:23:12.530 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:23:12.530 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:12.530 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:12.530 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:23:12.530 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:12.530 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:12.530 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:12.530 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 
00:23:12.530 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:12.530 19:51:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:12.530 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.530 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 97287 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97287 ']' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97287 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97287 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97287' 00:23:12.790 killing process with pid 97287 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 97287 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 97287 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 97287 ']' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 97287 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97287 ']' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97287 00:23:12.790 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (97287) - No such process 00:23:12.790 Process with pid 97287 is not found 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 97287 is not found' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:12.790 ************************************ 00:23:12.790 END TEST spdkcli_nvmf_tcp 00:23:12.790 ************************************ 00:23:12.790 00:23:12.790 real 0m17.412s 00:23:12.790 user 0m37.467s 00:23:12.790 sys 0m1.009s 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:12.790 19:51:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.050 19:51:38 -- common/autotest_common.sh@1142 -- # return 0 00:23:13.050 19:51:38 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:13.050 19:51:38 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:13.050 19:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.050 19:51:38 -- common/autotest_common.sh@10 -- # set +x 00:23:13.050 
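The spdkcli_nvmf_tcp run above drives the target entirely through spdkcli paths (/bdevs/malloc, /nvmf/transport, /nvmf/subsystem, ...), checks the resulting tree against spdkcli_nvmf.test.match, and then clears the configuration again. For orientation only, here is a rough, illustrative sketch of the plain rpc.py calls that a small subset of those spdkcli commands corresponds to; the RPC names appear elsewhere in this log, the serial number and bdev names are copied from the commands above, and this is not the script the test actually runs:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 32 512 -b Malloc3              # /bdevs/malloc create 32 512 Malloc3
  $rpc nvmf_create_transport -t tcp -u 8192              # nvmf/transport create tcp ... io_unit_size=8192
  $rpc nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
  $rpc nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3    # nsid selection omitted here
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260
  # ...and the clear-config pass above maps to the corresponding delete calls:
  $rpc nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1
  $rpc bdev_malloc_delete Malloc3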
************************************ 00:23:13.050 START TEST nvmf_identify_passthru 00:23:13.050 ************************************ 00:23:13.050 19:51:38 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:13.050 * Looking for test storage... 00:23:13.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:13.050 19:51:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:13.050 19:51:38 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.050 19:51:38 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.050 19:51:38 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.050 19:51:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.050 19:51:38 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.050 19:51:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.050 19:51:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:13.050 19:51:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.050 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.051 19:51:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:13.051 19:51:38 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.051 19:51:38 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.051 19:51:38 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.051 19:51:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.051 19:51:38 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.051 19:51:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.051 19:51:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:13.051 19:51:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.051 19:51:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.051 19:51:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:13.051 19:51:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:13.051 Cannot find device "nvmf_tgt_br" 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:13.051 Cannot find device "nvmf_tgt_br2" 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:13.051 Cannot find device "nvmf_tgt_br" 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:13.051 Cannot find device "nvmf_tgt_br2" 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:23:13.051 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:13.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:13.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:13.312 19:51:38 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:13.312 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:13.312 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:13.312 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:13.312 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:13.312 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:13.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:23:13.313 00:23:13.313 --- 10.0.0.2 ping statistics --- 00:23:13.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.313 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:13.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:13.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:23:13.313 00:23:13.313 --- 10.0.0.3 ping statistics --- 00:23:13.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.313 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:13.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:23:13.313 00:23:13.313 --- 10.0.0.1 ping statistics --- 00:23:13.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.313 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.313 19:51:39 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.313 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:23:13.313 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.313 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:13.313 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:23:13.313 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:23:13.313 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:23:13.313 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:23:13.572 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:23:13.572 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:23:13.572 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:23:13.572 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:13.572 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:23:13.573 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:13.573 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:23:13.573 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:13.573 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
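The ping checks above run against the veth/namespace topology that nvmf_veth_init builds: the initiator side (nvmf_init_if, 10.0.0.1/24) stays in the root namespace, the target side (nvmf_tgt_if, 10.0.0.2/24, plus a second interface at 10.0.0.3) lives in nvmf_tgt_ns_spdk, and the two are joined through the nvmf_br bridge. Condensed from the commands visible in the trace (second target interface omitted), the core of that setup is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host -> in-namespace target address, as checked in the log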
00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:13.573 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:23:13.832 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:23:13.832 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:13.832 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:13.832 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97779 00:23:13.832 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:13.832 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.832 19:51:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97779 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97779 ']' 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.832 19:51:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:14.091 [2024-07-15 19:51:39.616969] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:23:14.091 [2024-07-15 19:51:39.617072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.091 [2024-07-15 19:51:39.756897] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.091 [2024-07-15 19:51:39.872248] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.091 [2024-07-15 19:51:39.872488] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.091 [2024-07-15 19:51:39.872524] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.091 [2024-07-15 19:51:39.872533] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 
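At this point the identify-passthru target has just been launched inside nvmf_tgt_ns_spdk with --wait-for-rpc, so subsystem initialization is deferred until RPCs configure it; the rpc_cmd trace that follows sets the custom identify handler first and only then starts the framework and builds the subsystem around the local PCIe NVMe device. Condensed into plain rpc.py form (a sketch of the same sequence, assembled from the rpc_cmd calls in this test, not a replacement for the test script):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # wait for /var/tmp/spdk.sock to appear before issuing RPCs
  $rpc nvmf_set_config --passthru-identify-ctrlr          # must be set before framework init
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420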
00:23:14.091 [2024-07-15 19:51:39.872542] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.091 [2024-07-15 19:51:39.872673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.091 [2024-07-15 19:51:39.872828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.349 [2024-07-15 19:51:39.872981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.349 [2024-07-15 19:51:39.872930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:23:14.916 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.916 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:14.916 [2024-07-15 19:51:40.689918] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.916 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.916 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.175 [2024-07-15 19:51:40.704205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.175 Nvme0n1 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.175 [2024-07-15 19:51:40.846910] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.175 [ 00:23:15.175 { 00:23:15.175 "allow_any_host": true, 00:23:15.175 "hosts": [], 00:23:15.175 "listen_addresses": [], 00:23:15.175 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:15.175 "subtype": "Discovery" 00:23:15.175 }, 00:23:15.175 { 00:23:15.175 "allow_any_host": true, 00:23:15.175 "hosts": [], 00:23:15.175 "listen_addresses": [ 00:23:15.175 { 00:23:15.175 "adrfam": "IPv4", 00:23:15.175 "traddr": "10.0.0.2", 00:23:15.175 "trsvcid": "4420", 00:23:15.175 "trtype": "TCP" 00:23:15.175 } 00:23:15.175 ], 00:23:15.175 "max_cntlid": 65519, 00:23:15.175 "max_namespaces": 1, 00:23:15.175 "min_cntlid": 1, 00:23:15.175 "model_number": "SPDK bdev Controller", 00:23:15.175 "namespaces": [ 00:23:15.175 { 00:23:15.175 "bdev_name": "Nvme0n1", 00:23:15.175 "name": "Nvme0n1", 00:23:15.175 "nguid": "EA94A01512E64A1E8EF0D901A15BFC2E", 00:23:15.175 "nsid": 1, 00:23:15.175 "uuid": "ea94a015-12e6-4a1e-8ef0-d901a15bfc2e" 00:23:15.175 } 00:23:15.175 ], 00:23:15.175 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.175 "serial_number": "SPDK00000000000001", 00:23:15.175 "subtype": "NVMe" 00:23:15.175 } 00:23:15.175 ] 00:23:15.175 19:51:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:23:15.175 19:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:23:15.434 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:23:15.434 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:15.434 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:23:15.434 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:23:15.692 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:23:15.692 19:51:41 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:23:15.692 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:23:15.692 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.692 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.692 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.693 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:23:15.693 19:51:41 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.693 rmmod nvme_tcp 00:23:15.693 rmmod nvme_fabrics 00:23:15.693 rmmod nvme_keyring 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97779 ']' 00:23:15.693 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97779 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97779 ']' 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97779 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97779 00:23:15.693 killing process with pid 97779 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97779' 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97779 00:23:15.693 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97779 00:23:15.951 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.951 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.951 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.951 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.951 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.951 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.951 19:51:41 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:15.951 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.951 19:51:41 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:15.951 ************************************ 00:23:15.951 END TEST nvmf_identify_passthru 00:23:15.951 ************************************ 00:23:15.951 00:23:15.951 real 0m3.089s 00:23:15.951 user 0m7.522s 00:23:15.951 sys 0m0.845s 00:23:15.951 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.951 19:51:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:16.210 19:51:41 -- common/autotest_common.sh@1142 -- # return 0 00:23:16.210 19:51:41 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:16.210 19:51:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:16.210 19:51:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.210 19:51:41 -- common/autotest_common.sh@10 -- # set +x 00:23:16.210 ************************************ 00:23:16.210 START TEST nvmf_dif 00:23:16.210 ************************************ 00:23:16.210 19:51:41 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:16.210 * Looking for test storage... 00:23:16.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:16.210 19:51:41 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:16.210 19:51:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:16.210 19:51:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.210 19:51:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.210 19:51:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:16.211 19:51:41 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.211 19:51:41 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.211 19:51:41 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.211 19:51:41 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.211 19:51:41 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.211 19:51:41 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.211 19:51:41 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:16.211 19:51:41 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.211 19:51:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:16.211 19:51:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:16.211 19:51:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:16.211 19:51:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:16.211 19:51:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.211 19:51:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:16.211 19:51:41 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:16.211 Cannot find device "nvmf_tgt_br" 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@155 -- # true 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:16.211 Cannot find device "nvmf_tgt_br2" 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@156 -- # true 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:16.211 Cannot find device "nvmf_tgt_br" 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@158 -- # true 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:16.211 Cannot find device "nvmf_tgt_br2" 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@159 -- # true 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:16.211 19:51:41 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:16.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:16.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:16.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:16.470 00:23:16.470 --- 10.0.0.2 ping statistics --- 00:23:16.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.470 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:16.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:16.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:16.470 00:23:16.470 --- 10.0.0.3 ping statistics --- 00:23:16.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.470 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:16.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:23:16.470 00:23:16.470 --- 10.0.0.1 ping statistics --- 00:23:16.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.470 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:16.470 19:51:42 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:16.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:16.983 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:16.983 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.983 19:51:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:16.983 19:51:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.983 19:51:42 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.983 19:51:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=98125 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:16.983 19:51:42 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 98125 00:23:16.983 19:51:42 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 98125 ']' 00:23:16.983 19:51:42 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.983 19:51:42 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.983 19:51:42 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.983 19:51:42 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.983 19:51:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:16.983 [2024-07-15 19:51:42.656243] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:23:16.983 [2024-07-15 19:51:42.656394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.241 [2024-07-15 19:51:42.797129] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.241 [2024-07-15 19:51:42.910505] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:17.241 [2024-07-15 19:51:42.910582] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.241 [2024-07-15 19:51:42.910608] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.241 [2024-07-15 19:51:42.910619] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.241 [2024-07-15 19:51:42.910628] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.241 [2024-07-15 19:51:42.910659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:23:18.178 19:51:43 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:18.178 19:51:43 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.178 19:51:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:18.178 19:51:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:18.178 [2024-07-15 19:51:43.704235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.178 19:51:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.178 19:51:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:18.178 ************************************ 00:23:18.178 START TEST fio_dif_1_default 00:23:18.178 ************************************ 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:18.178 bdev_null0 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.178 19:51:43 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:18.178 [2024-07-15 19:51:43.752305] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.178 { 00:23:18.178 "params": { 00:23:18.178 "name": "Nvme$subsystem", 00:23:18.178 "trtype": "$TEST_TRANSPORT", 00:23:18.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.178 "adrfam": "ipv4", 00:23:18.178 "trsvcid": "$NVMF_PORT", 00:23:18.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.178 "hdgst": ${hdgst:-false}, 00:23:18.178 "ddgst": ${ddgst:-false} 00:23:18.178 }, 00:23:18.178 "method": "bdev_nvme_attach_controller" 00:23:18.178 } 00:23:18.178 EOF 00:23:18.178 )") 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:18.178 "params": { 00:23:18.178 "name": "Nvme0", 00:23:18.178 "trtype": "tcp", 00:23:18.178 "traddr": "10.0.0.2", 00:23:18.178 "adrfam": "ipv4", 00:23:18.178 "trsvcid": "4420", 00:23:18.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:18.178 "hdgst": false, 00:23:18.178 "ddgst": false 00:23:18.178 }, 00:23:18.178 "method": "bdev_nvme_attach_controller" 00:23:18.178 }' 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:18.178 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:18.179 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:18.179 19:51:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.437 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:18.437 fio-3.35 00:23:18.437 Starting 1 thread 00:23:30.655 00:23:30.655 filename0: (groupid=0, jobs=1): err= 0: pid=98205: Mon Jul 15 19:51:54 2024 00:23:30.655 read: IOPS=3457, BW=13.5MiB/s (14.2MB/s)(135MiB/10021msec) 00:23:30.655 slat (usec): min=6, max=222, avg= 8.13, stdev= 3.58 00:23:30.655 clat (usec): min=372, max=41609, avg=1133.11, stdev=5213.91 00:23:30.655 lat (usec): min=378, max=41618, avg=1141.24, stdev=5214.00 00:23:30.656 clat percentiles (usec): 00:23:30.656 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 420], 00:23:30.656 | 30.00th=[ 429], 40.00th=[ 437], 50.00th=[ 445], 60.00th=[ 457], 
00:23:30.656 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 502], 95.00th=[ 523], 00:23:30.656 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:23:30.656 | 99.99th=[41681] 00:23:30.656 bw ( KiB/s): min= 9696, max=19648, per=100.00%, avg=13856.00, stdev=3104.10, samples=20 00:23:30.656 iops : min= 2424, max= 4912, avg=3464.00, stdev=776.02, samples=20 00:23:30.656 lat (usec) : 500=89.83%, 750=8.44%, 1000=0.01% 00:23:30.656 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=1.69% 00:23:30.656 cpu : usr=88.75%, sys=9.55%, ctx=30, majf=0, minf=9 00:23:30.656 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.656 issued rwts: total=34644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.656 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:30.656 00:23:30.656 Run status group 0 (all jobs): 00:23:30.656 READ: bw=13.5MiB/s (14.2MB/s), 13.5MiB/s-13.5MiB/s (14.2MB/s-14.2MB/s), io=135MiB (142MB), run=10021-10021msec 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 ************************************ 00:23:30.656 END TEST fio_dif_1_default 00:23:30.656 ************************************ 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 00:23:30.656 real 0m11.031s 00:23:30.656 user 0m9.545s 00:23:30.656 sys 0m1.232s 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 19:51:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:30.656 19:51:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:30.656 19:51:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:30.656 19:51:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 ************************************ 00:23:30.656 START TEST fio_dif_1_multi_subsystems 00:23:30.656 ************************************ 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 bdev_null0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 [2024-07-15 19:51:54.839225] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 bdev_null1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.656 { 00:23:30.656 "params": { 00:23:30.656 "name": "Nvme$subsystem", 00:23:30.656 "trtype": "$TEST_TRANSPORT", 00:23:30.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.656 "adrfam": "ipv4", 00:23:30.656 "trsvcid": "$NVMF_PORT", 00:23:30.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.656 "hdgst": ${hdgst:-false}, 00:23:30.656 "ddgst": ${ddgst:-false} 00:23:30.656 }, 00:23:30.656 "method": "bdev_nvme_attach_controller" 00:23:30.656 } 00:23:30.656 EOF 00:23:30.656 )") 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.656 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.656 { 00:23:30.656 "params": { 00:23:30.656 "name": "Nvme$subsystem", 00:23:30.656 "trtype": "$TEST_TRANSPORT", 00:23:30.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.656 "adrfam": "ipv4", 00:23:30.656 "trsvcid": "$NVMF_PORT", 00:23:30.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.657 "hdgst": ${hdgst:-false}, 00:23:30.657 "ddgst": ${ddgst:-false} 00:23:30.657 }, 00:23:30.657 "method": "bdev_nvme_attach_controller" 00:23:30.657 } 00:23:30.657 EOF 00:23:30.657 )") 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
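The heredoc fragments being assembled here become the JSON that fio's SPDK bdev plugin consumes: one bdev_nvme_attach_controller entry per subsystem, each pointing at 10.0.0.2:4420 (the full config is printed a few lines below). fio itself is the stock binary built under /usr/src/fio; the fio_bdev wrapper essentially preloads the ioengine from build/fio/spdk_bdev and hands the config and job file over as /dev/fd descriptors. Roughly, with the paths from this run and placeholder file names standing in for those descriptors:

  # Run stock fio against SPDK bdevs by preloading the spdk_bdev ioengine
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio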
00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.657 "params": { 00:23:30.657 "name": "Nvme0", 00:23:30.657 "trtype": "tcp", 00:23:30.657 "traddr": "10.0.0.2", 00:23:30.657 "adrfam": "ipv4", 00:23:30.657 "trsvcid": "4420", 00:23:30.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:30.657 "hdgst": false, 00:23:30.657 "ddgst": false 00:23:30.657 }, 00:23:30.657 "method": "bdev_nvme_attach_controller" 00:23:30.657 },{ 00:23:30.657 "params": { 00:23:30.657 "name": "Nvme1", 00:23:30.657 "trtype": "tcp", 00:23:30.657 "traddr": "10.0.0.2", 00:23:30.657 "adrfam": "ipv4", 00:23:30.657 "trsvcid": "4420", 00:23:30.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.657 "hdgst": false, 00:23:30.657 "ddgst": false 00:23:30.657 }, 00:23:30.657 "method": "bdev_nvme_attach_controller" 00:23:30.657 }' 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:30.657 19:51:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.657 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:30.657 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:30.657 fio-3.35 00:23:30.657 Starting 2 threads 00:23:40.658 00:23:40.658 filename0: (groupid=0, jobs=1): err= 0: pid=98364: Mon Jul 15 19:52:05 2024 00:23:40.658 read: IOPS=219, BW=878KiB/s (899kB/s)(8800KiB/10026msec) 00:23:40.658 slat (usec): min=6, max=100, avg= 8.77, stdev= 4.32 00:23:40.658 clat (usec): min=395, max=42035, avg=18201.06, stdev=20079.86 00:23:40.658 lat (usec): min=402, max=42046, avg=18209.83, stdev=20079.88 00:23:40.658 clat percentiles (usec): 00:23:40.658 | 1.00th=[ 412], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 453], 00:23:40.658 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 506], 60.00th=[40633], 00:23:40.658 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:40.658 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:40.658 | 99.99th=[42206] 00:23:40.658 bw ( KiB/s): min= 544, max= 1184, per=49.81%, avg=878.40, stdev=184.40, samples=20 00:23:40.658 iops : min= 136, 
max= 296, avg=219.60, stdev=46.10, samples=20 00:23:40.658 lat (usec) : 500=48.77%, 750=6.91%, 1000=0.32% 00:23:40.658 lat (msec) : 2=0.18%, 50=43.82% 00:23:40.658 cpu : usr=94.59%, sys=4.78%, ctx=90, majf=0, minf=0 00:23:40.658 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:40.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.658 issued rwts: total=2200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.658 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:40.658 filename1: (groupid=0, jobs=1): err= 0: pid=98365: Mon Jul 15 19:52:05 2024 00:23:40.658 read: IOPS=221, BW=885KiB/s (907kB/s)(8880KiB/10030msec) 00:23:40.658 slat (nsec): min=6610, max=37735, avg=8784.78, stdev=3606.14 00:23:40.658 clat (usec): min=397, max=42515, avg=18043.86, stdev=20058.59 00:23:40.658 lat (usec): min=403, max=42546, avg=18052.64, stdev=20058.57 00:23:40.658 clat percentiles (usec): 00:23:40.658 | 1.00th=[ 408], 5.00th=[ 424], 10.00th=[ 437], 20.00th=[ 453], 00:23:40.658 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 506], 60.00th=[40633], 00:23:40.658 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:23:40.658 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:23:40.658 | 99.99th=[42730] 00:23:40.658 bw ( KiB/s): min= 544, max= 1280, per=50.26%, avg=886.40, stdev=240.39, samples=20 00:23:40.658 iops : min= 136, max= 320, avg=221.60, stdev=60.10, samples=20 00:23:40.658 lat (usec) : 500=47.97%, 750=7.88%, 1000=0.54% 00:23:40.658 lat (msec) : 2=0.18%, 50=43.42% 00:23:40.658 cpu : usr=95.03%, sys=4.57%, ctx=12, majf=0, minf=0 00:23:40.658 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:40.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.658 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.658 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:40.658 00:23:40.658 Run status group 0 (all jobs): 00:23:40.658 READ: bw=1763KiB/s (1805kB/s), 878KiB/s-885KiB/s (899kB/s-907kB/s), io=17.3MiB (18.1MB), run=10026-10030msec 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.658 19:52:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:40.658 ************************************ 00:23:40.658 END TEST fio_dif_1_multi_subsystems 00:23:40.658 ************************************ 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.658 00:23:40.658 real 0m11.161s 00:23:40.658 user 0m19.745s 00:23:40.658 sys 0m1.211s 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.658 19:52:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:40.658 19:52:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:40.658 19:52:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:40.658 19:52:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:40.658 19:52:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:40.658 19:52:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.658 ************************************ 00:23:40.658 START TEST fio_dif_rand_params 00:23:40.658 ************************************ 00:23:40.658 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:40.658 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:40.658 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:40.658 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
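The trace above first tears down the two subsystems used by fio_dif_1_multi_subsystems (nvmf_delete_subsystem followed by bdev_null_delete for each), prints the test timing, and then starts fio_dif_rand_params with NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5 before entering create_subsystem 0. A minimal standalone sketch of that teardown is shown here, assuming rpc_cmd in the trace wraps scripts/rpc.py against the nvmf target that is still running on the default RPC socket:

# Hedged sketch: equivalent of destroy_subsystems 0 1 using scripts/rpc.py directly.
# Assumes the SPDK nvmf target from the test is still up and reachable.
for sub in 0 1; do
    # Remove the NVMe-oF subsystem first, then the null bdev that backed its namespace.
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
    scripts/rpc.py bdev_null_delete "bdev_null${sub}"
done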
00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.659 bdev_null0 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.659 [2024-07-15 19:52:06.056681] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.659 { 00:23:40.659 "params": { 00:23:40.659 "name": "Nvme$subsystem", 00:23:40.659 "trtype": "$TEST_TRANSPORT", 00:23:40.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.659 "adrfam": "ipv4", 00:23:40.659 "trsvcid": "$NVMF_PORT", 00:23:40.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.659 "hdgst": ${hdgst:-false}, 00:23:40.659 "ddgst": ${ddgst:-false} 00:23:40.659 }, 00:23:40.659 "method": "bdev_nvme_attach_controller" 00:23:40.659 } 00:23:40.659 EOF 00:23:40.659 )") 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
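The create_subsystem 0 trace above sets up the target side for this run: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exported as a namespace of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. Collapsed into standalone scripts/rpc.py calls (again assuming rpc_cmd wraps that script, and that the TCP transport was already created earlier in the test with nvmf_create_transport -t tcp), it looks roughly like this:

# Hedged sketch of create_subsystem 0 for the NULL_DIF=3 case.
sub=0
# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, protection information type 3.
scripts/rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 3
# Export it as a namespace of an NVMe-oF subsystem listening on TCP 10.0.0.2:4420.
scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
    --serial-number "53313233-${sub}" --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
    -t tcp -a 10.0.0.2 -s 4420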
00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:40.659 "params": { 00:23:40.659 "name": "Nvme0", 00:23:40.659 "trtype": "tcp", 00:23:40.659 "traddr": "10.0.0.2", 00:23:40.659 "adrfam": "ipv4", 00:23:40.659 "trsvcid": "4420", 00:23:40.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.659 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:40.659 "hdgst": false, 00:23:40.659 "ddgst": false 00:23:40.659 }, 00:23:40.659 "method": "bdev_nvme_attach_controller" 00:23:40.659 }' 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:40.659 19:52:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.659 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:40.659 ... 
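At this point fio is launched with the SPDK bdev plugin preloaded, the generated JSON passed over /dev/fd/62 and the job file over /dev/fd/61. A rough standalone equivalent is sketched below. The bdev_nvme_attach_controller block and the fio options (randread, bs=128k, iodepth=3, numjobs=3, runtime=5) are taken from the trace; the outer "subsystems"/"bdev" wrapper, the Nvme0n1 filename, the /tmp paths and the rest of the job file are assumptions, since the real job file comes from gen_fio_conf and is not shown in the trace.

# Hedged sketch: run the same randread workload by hand with the fio spdk_bdev plugin.
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

cat > /tmp/dif_rand.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
; Assumed bdev name: attaching a controller named Nvme0 exposes namespace bdev Nvme0n1.
filename=Nvme0n1
FIO

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif_rand.fio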
00:23:40.659 fio-3.35 00:23:40.659 Starting 3 threads 00:23:47.268 00:23:47.268 filename0: (groupid=0, jobs=1): err= 0: pid=98521: Mon Jul 15 19:52:11 2024 00:23:47.268 read: IOPS=249, BW=31.1MiB/s (32.6MB/s)(156MiB/5004msec) 00:23:47.268 slat (nsec): min=6842, max=53860, avg=11788.16, stdev=4989.35 00:23:47.268 clat (usec): min=6071, max=53208, avg=12028.01, stdev=3670.42 00:23:47.268 lat (usec): min=6082, max=53247, avg=12039.80, stdev=3671.13 00:23:47.268 clat percentiles (usec): 00:23:47.268 | 1.00th=[ 7308], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11076], 00:23:47.268 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:23:47.268 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[13173], 00:23:47.268 | 99.00th=[14222], 99.50th=[52691], 99.90th=[52691], 99.95th=[53216], 00:23:47.268 | 99.99th=[53216] 00:23:47.268 bw ( KiB/s): min=28672, max=35187, per=34.07%, avg=31671.44, stdev=2224.69, samples=9 00:23:47.268 iops : min= 224, max= 274, avg=247.33, stdev=17.20, samples=9 00:23:47.268 lat (msec) : 10=5.54%, 20=93.74%, 100=0.72% 00:23:47.268 cpu : usr=92.52%, sys=6.12%, ctx=7, majf=0, minf=9 00:23:47.268 IO depths : 1=9.3%, 2=90.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:47.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.268 issued rwts: total=1246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.268 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:47.269 filename0: (groupid=0, jobs=1): err= 0: pid=98522: Mon Jul 15 19:52:11 2024 00:23:47.269 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(129MiB/5004msec) 00:23:47.269 slat (nsec): min=6837, max=68804, avg=10659.04, stdev=5558.23 00:23:47.269 clat (usec): min=8216, max=17619, avg=14519.31, stdev=1738.35 00:23:47.269 lat (usec): min=8243, max=17633, avg=14529.97, stdev=1737.90 00:23:47.269 clat percentiles (usec): 00:23:47.269 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[13435], 20.00th=[14091], 00:23:47.269 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:23:47.269 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16057], 95.00th=[16319], 00:23:47.269 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:23:47.269 | 99.99th=[17695] 00:23:47.269 bw ( KiB/s): min=25344, max=28302, per=28.27%, avg=26275.78, stdev=802.12, samples=9 00:23:47.269 iops : min= 198, max= 221, avg=205.22, stdev= 6.24, samples=9 00:23:47.269 lat (msec) : 10=6.69%, 20=93.31% 00:23:47.269 cpu : usr=92.34%, sys=6.36%, ctx=28, majf=0, minf=9 00:23:47.269 IO depths : 1=33.2%, 2=66.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:47.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.269 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:47.269 filename0: (groupid=0, jobs=1): err= 0: pid=98523: Mon Jul 15 19:52:11 2024 00:23:47.269 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(170MiB/5007msec) 00:23:47.269 slat (nsec): min=6699, max=46470, avg=11978.11, stdev=4212.70 00:23:47.269 clat (usec): min=5887, max=53940, avg=11040.09, stdev=3903.26 00:23:47.269 lat (usec): min=5897, max=53952, avg=11052.07, stdev=3903.22 00:23:47.269 clat percentiles (usec): 00:23:47.269 | 1.00th=[ 6849], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10028], 00:23:47.269 | 30.00th=[10421], 
40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:23:47.269 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:23:47.269 | 99.00th=[14746], 99.50th=[51119], 99.90th=[52167], 99.95th=[53740], 00:23:47.269 | 99.99th=[53740] 00:23:47.269 bw ( KiB/s): min=29832, max=36864, per=37.51%, avg=34863.56, stdev=2464.97, samples=9 00:23:47.269 iops : min= 233, max= 288, avg=272.33, stdev=19.25, samples=9 00:23:47.269 lat (msec) : 10=18.04%, 20=81.08%, 50=0.22%, 100=0.66% 00:23:47.269 cpu : usr=92.45%, sys=6.13%, ctx=11, majf=0, minf=0 00:23:47.269 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:47.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.269 issued rwts: total=1358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.269 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:47.269 00:23:47.269 Run status group 0 (all jobs): 00:23:47.269 READ: bw=90.8MiB/s (95.2MB/s), 25.8MiB/s-33.9MiB/s (27.0MB/s-35.5MB/s), io=455MiB (477MB), run=5004-5007msec 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 bdev_null0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 [2024-07-15 19:52:12.073478] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 bdev_null1 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
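For this second pass the parameters switch to NULL_DIF=2, bs=4k, numjobs=8, iodepth=16 and two extra files, so create_subsystems 0 1 2 builds three null bdevs, each 64 MiB with 512-byte blocks and 16 bytes of per-block metadata carrying DIF type 2 protection (the previous round used type 3); with three files and numjobs=8 the fio run further below starts 24 threads. A quick way to confirm what one of these calls produced, sketched with scripts/rpc.py (rpc_cmd in the trace is assumed to wrap the same script, and the jq pretty-print is only illustrative):

# Hedged sketch: dump the freshly created null bdev and eyeball its block size,
# metadata size and DIF settings in the JSON that comes back.
scripts/rpc.py bdev_get_bdevs -b bdev_null0 | jq .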
00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 bdev_null2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
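The gen_nvmf_target_json trace that begins here and continues below builds one bdev_nvme_attach_controller parameter block per subsystem id, collects them in the config array, and finally joins them with IFS=',' before handing the result to fio over /dev/fd/62. A condensed sketch of that assembly follows; the three inner blocks match what the trace eventually prints, while the outer "subsystems"/"bdev" wrapper and the /tmp path are assumptions based on SPDK's standard JSON config layout.

# Hedged sketch of the config generation for subsystems 0 1 2.
config=()
for sub in 0 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme${sub}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
    "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the blocks with commas and wrap them in a bdev-subsystem config that the
# fio spdk_bdev plugin can consume via --spdk_json_conf (wrapper assumed, see above).
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "$joined" \
    | jq . > /tmp/multi_bdev.json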
00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:47.269 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:47.269 { 00:23:47.269 "params": { 00:23:47.269 "name": "Nvme$subsystem", 00:23:47.270 "trtype": "$TEST_TRANSPORT", 00:23:47.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.270 "adrfam": "ipv4", 00:23:47.270 "trsvcid": "$NVMF_PORT", 00:23:47.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.270 "hdgst": ${hdgst:-false}, 00:23:47.270 "ddgst": ${ddgst:-false} 00:23:47.270 }, 00:23:47.270 "method": "bdev_nvme_attach_controller" 00:23:47.270 } 00:23:47.270 EOF 00:23:47.270 )") 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:47.270 { 00:23:47.270 "params": { 00:23:47.270 "name": "Nvme$subsystem", 00:23:47.270 "trtype": "$TEST_TRANSPORT", 00:23:47.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.270 "adrfam": "ipv4", 00:23:47.270 "trsvcid": "$NVMF_PORT", 00:23:47.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.270 "hdgst": ${hdgst:-false}, 00:23:47.270 "ddgst": ${ddgst:-false} 00:23:47.270 }, 00:23:47.270 "method": "bdev_nvme_attach_controller" 00:23:47.270 } 00:23:47.270 EOF 00:23:47.270 )") 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 
-- # cat 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:47.270 { 00:23:47.270 "params": { 00:23:47.270 "name": "Nvme$subsystem", 00:23:47.270 "trtype": "$TEST_TRANSPORT", 00:23:47.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.270 "adrfam": "ipv4", 00:23:47.270 "trsvcid": "$NVMF_PORT", 00:23:47.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.270 "hdgst": ${hdgst:-false}, 00:23:47.270 "ddgst": ${ddgst:-false} 00:23:47.270 }, 00:23:47.270 "method": "bdev_nvme_attach_controller" 00:23:47.270 } 00:23:47.270 EOF 00:23:47.270 )") 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:47.270 "params": { 00:23:47.270 "name": "Nvme0", 00:23:47.270 "trtype": "tcp", 00:23:47.270 "traddr": "10.0.0.2", 00:23:47.270 "adrfam": "ipv4", 00:23:47.270 "trsvcid": "4420", 00:23:47.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:47.270 "hdgst": false, 00:23:47.270 "ddgst": false 00:23:47.270 }, 00:23:47.270 "method": "bdev_nvme_attach_controller" 00:23:47.270 },{ 00:23:47.270 "params": { 00:23:47.270 "name": "Nvme1", 00:23:47.270 "trtype": "tcp", 00:23:47.270 "traddr": "10.0.0.2", 00:23:47.270 "adrfam": "ipv4", 00:23:47.270 "trsvcid": "4420", 00:23:47.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:47.270 "hdgst": false, 00:23:47.270 "ddgst": false 00:23:47.270 }, 00:23:47.270 "method": "bdev_nvme_attach_controller" 00:23:47.270 },{ 00:23:47.270 "params": { 00:23:47.270 "name": "Nvme2", 00:23:47.270 "trtype": "tcp", 00:23:47.270 "traddr": "10.0.0.2", 00:23:47.270 "adrfam": "ipv4", 00:23:47.270 "trsvcid": "4420", 00:23:47.270 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:47.270 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:47.270 "hdgst": false, 00:23:47.270 "ddgst": false 00:23:47.270 }, 00:23:47.270 "method": "bdev_nvme_attach_controller" 00:23:47.270 }' 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:47.270 19:52:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:47.270 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:47.270 ... 00:23:47.270 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:47.270 ... 00:23:47.270 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:47.270 ... 00:23:47.270 fio-3.35 00:23:47.270 Starting 24 threads 00:23:59.513 00:23:59.513 filename0: (groupid=0, jobs=1): err= 0: pid=98620: Mon Jul 15 19:52:23 2024 00:23:59.513 read: IOPS=237, BW=949KiB/s (972kB/s)(9516KiB/10030msec) 00:23:59.513 slat (usec): min=7, max=4018, avg=15.00, stdev=127.14 00:23:59.513 clat (msec): min=16, max=133, avg=67.24, stdev=18.93 00:23:59.513 lat (msec): min=16, max=133, avg=67.25, stdev=18.93 00:23:59.513 clat percentiles (msec): 00:23:59.513 | 1.00th=[ 26], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 48], 00:23:59.513 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:23:59.513 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 99], 00:23:59.513 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 134], 99.95th=[ 134], 00:23:59.513 | 99.99th=[ 134] 00:23:59.513 bw ( KiB/s): min= 736, max= 1200, per=4.27%, avg=948.70, stdev=140.48, samples=20 00:23:59.513 iops : min= 184, max= 300, avg=237.15, stdev=35.08, samples=20 00:23:59.513 lat (msec) : 20=0.29%, 50=23.54%, 100=72.47%, 250=3.70% 00:23:59.513 cpu : usr=39.74%, sys=1.17%, ctx=1094, majf=0, minf=9 00:23:59.513 IO depths : 1=1.3%, 2=2.8%, 4=10.7%, 8=73.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:23:59.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 issued rwts: total=2379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.513 filename0: (groupid=0, jobs=1): err= 0: pid=98621: Mon Jul 15 19:52:23 2024 00:23:59.513 read: IOPS=263, BW=1055KiB/s (1080kB/s)(10.3MiB/10032msec) 00:23:59.513 slat (usec): min=4, max=8021, avg=16.83, stdev=199.64 00:23:59.513 clat (msec): min=20, max=131, avg=60.56, stdev=17.49 00:23:59.513 lat (msec): min=20, max=131, avg=60.57, stdev=17.49 00:23:59.513 clat percentiles (msec): 00:23:59.513 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 48], 00:23:59.513 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 62], 00:23:59.513 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 95], 00:23:59.513 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 132], 99.95th=[ 132], 00:23:59.513 | 99.99th=[ 132] 00:23:59.513 bw ( KiB/s): min= 824, max= 1200, per=4.73%, avg=1051.30, stdev=103.28, samples=20 00:23:59.513 iops : min= 206, max= 300, avg=262.80, stdev=25.86, samples=20 00:23:59.513 lat (msec) : 50=35.71%, 100=62.40%, 250=1.89% 00:23:59.513 cpu : usr=40.78%, sys=1.18%, ctx=1183, majf=0, minf=10 00:23:59.513 IO depths : 1=0.5%, 2=1.0%, 4=6.5%, 8=78.8%, 16=13.2%, 32=0.0%, >=64=0.0% 00:23:59.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 complete : 0=0.0%, 4=89.2%, 8=6.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 issued rwts: total=2646,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:59.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.513 filename0: (groupid=0, jobs=1): err= 0: pid=98622: Mon Jul 15 19:52:23 2024 00:23:59.513 read: IOPS=206, BW=825KiB/s (845kB/s)(8256KiB/10002msec) 00:23:59.513 slat (usec): min=3, max=8024, avg=21.70, stdev=305.25 00:23:59.513 clat (usec): min=1584, max=172619, avg=77405.88, stdev=24364.67 00:23:59.513 lat (usec): min=1592, max=172630, avg=77427.58, stdev=24363.76 00:23:59.513 clat percentiles (msec): 00:23:59.513 | 1.00th=[ 9], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 61], 00:23:59.513 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:23:59.513 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:23:59.513 | 99.00th=[ 153], 99.50th=[ 174], 99.90th=[ 174], 99.95th=[ 174], 00:23:59.513 | 99.99th=[ 174] 00:23:59.513 bw ( KiB/s): min= 640, max= 1024, per=3.62%, avg=803.79, stdev=99.97, samples=19 00:23:59.513 iops : min= 160, max= 256, avg=200.95, stdev=24.99, samples=19 00:23:59.513 lat (msec) : 2=0.53%, 10=0.78%, 20=0.78%, 50=9.79%, 100=76.41% 00:23:59.513 lat (msec) : 250=11.72% 00:23:59.513 cpu : usr=32.24%, sys=0.91%, ctx=833, majf=0, minf=9 00:23:59.513 IO depths : 1=2.1%, 2=4.8%, 4=14.0%, 8=67.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:59.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 complete : 0=0.0%, 4=91.1%, 8=4.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.513 filename0: (groupid=0, jobs=1): err= 0: pid=98623: Mon Jul 15 19:52:23 2024 00:23:59.513 read: IOPS=255, BW=1023KiB/s (1048kB/s)(10.0MiB/10029msec) 00:23:59.513 slat (usec): min=7, max=8022, avg=13.15, stdev=158.24 00:23:59.513 clat (msec): min=17, max=141, avg=62.43, stdev=19.48 00:23:59.513 lat (msec): min=17, max=141, avg=62.44, stdev=19.49 00:23:59.513 clat percentiles (msec): 00:23:59.513 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 00:23:59.513 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 65], 00:23:59.513 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 91], 95.00th=[ 97], 00:23:59.513 | 99.00th=[ 115], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 142], 00:23:59.513 | 99.99th=[ 142] 00:23:59.513 bw ( KiB/s): min= 688, max= 1584, per=4.60%, avg=1021.25, stdev=198.86, samples=20 00:23:59.513 iops : min= 172, max= 396, avg=255.30, stdev=49.71, samples=20 00:23:59.513 lat (msec) : 20=0.47%, 50=33.33%, 100=62.92%, 250=3.27% 00:23:59.513 cpu : usr=41.45%, sys=1.09%, ctx=1205, majf=0, minf=9 00:23:59.513 IO depths : 1=1.0%, 2=2.1%, 4=7.9%, 8=76.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:23:59.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 issued rwts: total=2565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.513 filename0: (groupid=0, jobs=1): err= 0: pid=98625: Mon Jul 15 19:52:23 2024 00:23:59.513 read: IOPS=210, BW=842KiB/s (862kB/s)(8420KiB/10005msec) 00:23:59.513 slat (usec): min=3, max=8020, avg=14.12, stdev=174.62 00:23:59.513 clat (msec): min=6, max=144, avg=75.90, stdev=21.31 00:23:59.513 lat (msec): min=6, max=144, avg=75.92, stdev=21.31 00:23:59.513 clat percentiles (msec): 00:23:59.513 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:23:59.513 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 
72], 60.00th=[ 74], 00:23:59.513 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 109], 00:23:59.513 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:23:59.513 | 99.99th=[ 144] 00:23:59.513 bw ( KiB/s): min= 640, max= 1120, per=3.75%, avg=832.42, stdev=110.03, samples=19 00:23:59.513 iops : min= 160, max= 280, avg=208.11, stdev=27.51, samples=19 00:23:59.513 lat (msec) : 10=0.76%, 20=0.76%, 50=9.79%, 100=77.43%, 250=11.26% 00:23:59.513 cpu : usr=32.00%, sys=1.14%, ctx=841, majf=0, minf=9 00:23:59.513 IO depths : 1=1.1%, 2=3.1%, 4=11.3%, 8=72.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:23:59.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 complete : 0=0.0%, 4=90.6%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.513 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.513 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.513 filename0: (groupid=0, jobs=1): err= 0: pid=98626: Mon Jul 15 19:52:23 2024 00:23:59.514 read: IOPS=238, BW=955KiB/s (978kB/s)(9572KiB/10026msec) 00:23:59.514 slat (usec): min=7, max=4028, avg=15.53, stdev=142.07 00:23:59.514 clat (msec): min=24, max=141, avg=66.94, stdev=19.97 00:23:59.514 lat (msec): min=24, max=141, avg=66.95, stdev=19.97 00:23:59.514 clat percentiles (msec): 00:23:59.514 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:23:59.514 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:23:59.514 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:23:59.514 | 99.00th=[ 120], 99.50th=[ 124], 99.90th=[ 142], 99.95th=[ 142], 00:23:59.514 | 99.99th=[ 142] 00:23:59.514 bw ( KiB/s): min= 768, max= 1152, per=4.28%, avg=950.55, stdev=120.27, samples=20 00:23:59.514 iops : min= 192, max= 288, avg=237.60, stdev=30.11, samples=20 00:23:59.514 lat (msec) : 50=24.82%, 100=67.86%, 250=7.31% 00:23:59.514 cpu : usr=40.53%, sys=1.39%, ctx=1392, majf=0, minf=9 00:23:59.514 IO depths : 1=1.3%, 2=2.7%, 4=9.9%, 8=74.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:59.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 issued rwts: total=2393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.514 filename0: (groupid=0, jobs=1): err= 0: pid=98627: Mon Jul 15 19:52:23 2024 00:23:59.514 read: IOPS=259, BW=1038KiB/s (1063kB/s)(10.2MiB/10028msec) 00:23:59.514 slat (nsec): min=7464, max=48061, avg=10549.66, stdev=3718.62 00:23:59.514 clat (msec): min=21, max=160, avg=61.58, stdev=17.86 00:23:59.514 lat (msec): min=21, max=160, avg=61.59, stdev=17.86 00:23:59.514 clat percentiles (msec): 00:23:59.514 | 1.00th=[ 23], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 47], 00:23:59.514 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 64], 00:23:59.514 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 94], 00:23:59.514 | 99.00th=[ 115], 99.50th=[ 136], 99.90th=[ 161], 99.95th=[ 161], 00:23:59.514 | 99.99th=[ 161] 00:23:59.514 bw ( KiB/s): min= 768, max= 1200, per=4.66%, avg=1034.05, stdev=106.85, samples=20 00:23:59.514 iops : min= 192, max= 300, avg=258.50, stdev=26.71, samples=20 00:23:59.514 lat (msec) : 50=31.67%, 100=65.95%, 250=2.38% 00:23:59.514 cpu : usr=40.98%, sys=1.12%, ctx=1380, majf=0, minf=9 00:23:59.514 IO depths : 1=0.8%, 2=2.0%, 4=9.1%, 8=75.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:59.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:23:59.514 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.514 filename0: (groupid=0, jobs=1): err= 0: pid=98628: Mon Jul 15 19:52:23 2024 00:23:59.514 read: IOPS=201, BW=805KiB/s (824kB/s)(8052KiB/10007msec) 00:23:59.514 slat (usec): min=4, max=8021, avg=14.66, stdev=178.59 00:23:59.514 clat (msec): min=12, max=156, avg=79.42, stdev=23.69 00:23:59.514 lat (msec): min=12, max=156, avg=79.44, stdev=23.69 00:23:59.514 clat percentiles (msec): 00:23:59.514 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:23:59.514 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:23:59.514 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 124], 00:23:59.514 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:23:59.514 | 99.99th=[ 157] 00:23:59.514 bw ( KiB/s): min= 640, max= 1120, per=3.59%, avg=797.95, stdev=99.89, samples=19 00:23:59.514 iops : min= 160, max= 280, avg=199.47, stdev=24.98, samples=19 00:23:59.514 lat (msec) : 20=0.55%, 50=12.22%, 100=71.29%, 250=15.95% 00:23:59.514 cpu : usr=31.95%, sys=1.05%, ctx=939, majf=0, minf=9 00:23:59.514 IO depths : 1=2.1%, 2=4.3%, 4=12.5%, 8=70.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:23:59.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 issued rwts: total=2013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.514 filename1: (groupid=0, jobs=1): err= 0: pid=98631: Mon Jul 15 19:52:23 2024 00:23:59.514 read: IOPS=263, BW=1053KiB/s (1078kB/s)(10.3MiB/10055msec) 00:23:59.514 slat (usec): min=7, max=9022, avg=16.36, stdev=201.47 00:23:59.514 clat (usec): min=1632, max=126974, avg=60674.94, stdev=22769.76 00:23:59.514 lat (usec): min=1641, max=126989, avg=60691.30, stdev=22767.52 00:23:59.514 clat percentiles (msec): 00:23:59.514 | 1.00th=[ 3], 5.00th=[ 23], 10.00th=[ 40], 20.00th=[ 46], 00:23:59.514 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 64], 00:23:59.514 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 97], 00:23:59.514 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 128], 99.95th=[ 128], 00:23:59.514 | 99.99th=[ 128] 00:23:59.514 bw ( KiB/s): min= 776, max= 1908, per=4.74%, avg=1052.00, stdev=235.38, samples=20 00:23:59.514 iops : min= 194, max= 477, avg=262.85, stdev=58.84, samples=20 00:23:59.514 lat (msec) : 2=0.60%, 4=2.08%, 10=1.55%, 50=31.58%, 100=60.63% 00:23:59.514 lat (msec) : 250=3.55% 00:23:59.514 cpu : usr=42.05%, sys=1.16%, ctx=1279, majf=0, minf=0 00:23:59.514 IO depths : 1=1.1%, 2=2.7%, 4=10.1%, 8=73.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:23:59.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 issued rwts: total=2647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.514 filename1: (groupid=0, jobs=1): err= 0: pid=98632: Mon Jul 15 19:52:23 2024 00:23:59.514 read: IOPS=236, BW=946KiB/s (969kB/s)(9500KiB/10041msec) 00:23:59.514 slat (usec): min=5, max=4015, avg=12.23, stdev=82.28 00:23:59.514 clat (msec): min=22, max=145, avg=67.52, stdev=20.15 00:23:59.514 lat (msec): min=22, max=145, avg=67.53, stdev=20.15 00:23:59.514 
clat percentiles (msec): 00:23:59.514 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 48], 00:23:59.514 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:23:59.514 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 103], 00:23:59.514 | 99.00th=[ 126], 99.50th=[ 126], 99.90th=[ 146], 99.95th=[ 146], 00:23:59.514 | 99.99th=[ 146] 00:23:59.514 bw ( KiB/s): min= 728, max= 1152, per=4.25%, avg=943.20, stdev=129.50, samples=20 00:23:59.514 iops : min= 182, max= 288, avg=235.75, stdev=32.41, samples=20 00:23:59.514 lat (msec) : 50=24.46%, 100=70.11%, 250=5.43% 00:23:59.514 cpu : usr=41.23%, sys=1.14%, ctx=1304, majf=0, minf=9 00:23:59.514 IO depths : 1=1.4%, 2=3.2%, 4=10.5%, 8=72.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:59.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.514 filename1: (groupid=0, jobs=1): err= 0: pid=98633: Mon Jul 15 19:52:23 2024 00:23:59.514 read: IOPS=213, BW=855KiB/s (876kB/s)(8560KiB/10006msec) 00:23:59.514 slat (usec): min=4, max=8021, avg=18.14, stdev=244.86 00:23:59.514 clat (msec): min=5, max=162, avg=74.70, stdev=24.71 00:23:59.514 lat (msec): min=5, max=162, avg=74.71, stdev=24.70 00:23:59.514 clat percentiles (msec): 00:23:59.514 | 1.00th=[ 8], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:23:59.514 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:23:59.514 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 120], 00:23:59.514 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 163], 99.95th=[ 163], 00:23:59.514 | 99.99th=[ 163] 00:23:59.514 bw ( KiB/s): min= 640, max= 1024, per=3.72%, avg=826.95, stdev=125.57, samples=19 00:23:59.514 iops : min= 160, max= 256, avg=206.74, stdev=31.39, samples=19 00:23:59.514 lat (msec) : 10=1.03%, 20=0.75%, 50=16.17%, 100=69.16%, 250=12.90% 00:23:59.514 cpu : usr=33.70%, sys=1.05%, ctx=922, majf=0, minf=9 00:23:59.514 IO depths : 1=1.6%, 2=3.8%, 4=12.7%, 8=70.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:23:59.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 complete : 0=0.0%, 4=90.7%, 8=4.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 issued rwts: total=2140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.514 filename1: (groupid=0, jobs=1): err= 0: pid=98634: Mon Jul 15 19:52:23 2024 00:23:59.514 read: IOPS=263, BW=1055KiB/s (1080kB/s)(10.3MiB/10042msec) 00:23:59.514 slat (usec): min=3, max=4021, avg=16.93, stdev=164.09 00:23:59.514 clat (msec): min=5, max=132, avg=60.56, stdev=18.05 00:23:59.514 lat (msec): min=5, max=132, avg=60.58, stdev=18.05 00:23:59.514 clat percentiles (msec): 00:23:59.514 | 1.00th=[ 9], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 47], 00:23:59.514 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 60], 60.00th=[ 64], 00:23:59.514 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 92], 00:23:59.514 | 99.00th=[ 109], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:23:59.514 | 99.99th=[ 132] 00:23:59.514 bw ( KiB/s): min= 768, max= 1280, per=4.74%, avg=1052.45, stdev=128.24, samples=20 00:23:59.514 iops : min= 192, max= 320, avg=263.10, stdev=32.06, samples=20 00:23:59.514 lat (msec) : 10=1.21%, 50=33.46%, 100=62.58%, 250=2.76% 00:23:59.514 cpu : usr=43.08%, sys=1.13%, ctx=1382, majf=0, minf=9 00:23:59.514 
IO depths : 1=0.5%, 2=1.1%, 4=6.2%, 8=78.5%, 16=13.6%, 32=0.0%, >=64=0.0% 00:23:59.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 complete : 0=0.0%, 4=89.3%, 8=6.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 issued rwts: total=2648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.514 filename1: (groupid=0, jobs=1): err= 0: pid=98635: Mon Jul 15 19:52:23 2024 00:23:59.514 read: IOPS=232, BW=930KiB/s (953kB/s)(9344KiB/10044msec) 00:23:59.514 slat (usec): min=7, max=8281, avg=30.98, stdev=407.79 00:23:59.514 clat (msec): min=24, max=132, avg=68.50, stdev=19.78 00:23:59.514 lat (msec): min=24, max=132, avg=68.53, stdev=19.79 00:23:59.514 clat percentiles (msec): 00:23:59.514 | 1.00th=[ 26], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 48], 00:23:59.514 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:23:59.514 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 100], 00:23:59.514 | 99.00th=[ 124], 99.50th=[ 131], 99.90th=[ 133], 99.95th=[ 133], 00:23:59.514 | 99.99th=[ 133] 00:23:59.514 bw ( KiB/s): min= 763, max= 1120, per=4.17%, avg=927.30, stdev=100.45, samples=20 00:23:59.514 iops : min= 190, max= 280, avg=231.75, stdev=25.14, samples=20 00:23:59.514 lat (msec) : 50=23.24%, 100=72.35%, 250=4.41% 00:23:59.514 cpu : usr=32.24%, sys=0.78%, ctx=943, majf=0, minf=9 00:23:59.514 IO depths : 1=0.5%, 2=1.1%, 4=8.0%, 8=77.0%, 16=13.4%, 32=0.0%, >=64=0.0% 00:23:59.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 complete : 0=0.0%, 4=89.5%, 8=6.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.514 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.514 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.514 filename1: (groupid=0, jobs=1): err= 0: pid=98636: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=231, BW=928KiB/s (950kB/s)(9280KiB/10005msec) 00:23:59.515 slat (usec): min=6, max=8019, avg=13.93, stdev=166.32 00:23:59.515 clat (msec): min=22, max=142, avg=68.89, stdev=19.51 00:23:59.515 lat (msec): min=22, max=142, avg=68.91, stdev=19.51 00:23:59.515 clat percentiles (msec): 00:23:59.515 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 51], 00:23:59.515 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:23:59.515 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 105], 00:23:59.515 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 142], 00:23:59.515 | 99.99th=[ 142] 00:23:59.515 bw ( KiB/s): min= 768, max= 1121, per=4.18%, avg=929.32, stdev=118.52, samples=19 00:23:59.515 iops : min= 192, max= 280, avg=232.32, stdev=29.61, samples=19 00:23:59.515 lat (msec) : 50=19.96%, 100=74.05%, 250=5.99% 00:23:59.515 cpu : usr=39.25%, sys=1.18%, ctx=1290, majf=0, minf=9 00:23:59.515 IO depths : 1=2.2%, 2=4.5%, 4=12.5%, 8=69.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:23:59.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.515 filename1: (groupid=0, jobs=1): err= 0: pid=98637: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=223, BW=893KiB/s (914kB/s)(8936KiB/10010msec) 00:23:59.515 slat (nsec): min=3803, max=42560, avg=10199.87, stdev=3446.92 00:23:59.515 clat (msec): min=23, max=143, avg=71.63, stdev=20.97 
00:23:59.515 lat (msec): min=23, max=143, avg=71.64, stdev=20.97 00:23:59.515 clat percentiles (msec): 00:23:59.515 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 48], 00:23:59.515 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 73], 00:23:59.515 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 97], 95.00th=[ 108], 00:23:59.515 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:23:59.515 | 99.99th=[ 144] 00:23:59.515 bw ( KiB/s): min= 640, max= 1072, per=3.98%, avg=884.21, stdev=134.35, samples=19 00:23:59.515 iops : min= 160, max= 268, avg=221.05, stdev=33.59, samples=19 00:23:59.515 lat (msec) : 50=23.68%, 100=68.80%, 250=7.52% 00:23:59.515 cpu : usr=32.19%, sys=0.85%, ctx=843, majf=0, minf=9 00:23:59.515 IO depths : 1=1.5%, 2=3.3%, 4=11.4%, 8=71.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:23:59.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.515 filename1: (groupid=0, jobs=1): err= 0: pid=98638: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=199, BW=799KiB/s (818kB/s)(7992KiB/10008msec) 00:23:59.515 slat (nsec): min=3970, max=27662, avg=10330.10, stdev=3420.03 00:23:59.515 clat (msec): min=24, max=188, avg=80.07, stdev=23.00 00:23:59.515 lat (msec): min=24, max=188, avg=80.08, stdev=23.00 00:23:59.515 clat percentiles (msec): 00:23:59.515 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 63], 00:23:59.515 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:23:59.515 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:23:59.515 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 190], 99.95th=[ 190], 00:23:59.515 | 99.99th=[ 190] 00:23:59.515 bw ( KiB/s): min= 680, max= 944, per=3.58%, avg=794.11, stdev=75.09, samples=19 00:23:59.515 iops : min= 170, max= 236, avg=198.53, stdev=18.77, samples=19 00:23:59.515 lat (msec) : 50=8.16%, 100=75.88%, 250=15.97% 00:23:59.515 cpu : usr=34.05%, sys=1.09%, ctx=965, majf=0, minf=9 00:23:59.515 IO depths : 1=2.3%, 2=5.0%, 4=15.1%, 8=67.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:23:59.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 issued rwts: total=1998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.515 filename2: (groupid=0, jobs=1): err= 0: pid=98639: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=205, BW=823KiB/s (843kB/s)(8236KiB/10008msec) 00:23:59.515 slat (usec): min=4, max=8018, avg=14.31, stdev=176.51 00:23:59.515 clat (msec): min=26, max=141, avg=77.63, stdev=20.32 00:23:59.515 lat (msec): min=26, max=141, avg=77.64, stdev=20.32 00:23:59.515 clat percentiles (msec): 00:23:59.515 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:23:59.515 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:23:59.515 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 111], 00:23:59.515 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:23:59.515 | 99.99th=[ 142] 00:23:59.515 bw ( KiB/s): min= 640, max= 1072, per=3.69%, avg=819.79, stdev=112.83, samples=19 00:23:59.515 iops : min= 160, max= 268, avg=204.95, stdev=28.21, samples=19 00:23:59.515 lat (msec) : 50=10.20%, 100=79.12%, 250=10.68% 00:23:59.515 cpu : 
usr=31.98%, sys=0.94%, ctx=916, majf=0, minf=9 00:23:59.515 IO depths : 1=1.8%, 2=4.2%, 4=13.0%, 8=69.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:23:59.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.515 filename2: (groupid=0, jobs=1): err= 0: pid=98640: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=227, BW=911KiB/s (933kB/s)(9140KiB/10030msec) 00:23:59.515 slat (nsec): min=7533, max=35768, avg=10502.10, stdev=3588.70 00:23:59.515 clat (msec): min=20, max=158, avg=70.03, stdev=21.41 00:23:59.515 lat (msec): min=20, max=158, avg=70.05, stdev=21.41 00:23:59.515 clat percentiles (msec): 00:23:59.515 | 1.00th=[ 25], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:23:59.515 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 72], 00:23:59.515 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 00:23:59.515 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 159], 99.95th=[ 159], 00:23:59.515 | 99.99th=[ 159] 00:23:59.515 bw ( KiB/s): min= 640, max= 1152, per=4.10%, avg=910.85, stdev=126.40, samples=20 00:23:59.515 iops : min= 160, max= 288, avg=227.70, stdev=31.58, samples=20 00:23:59.515 lat (msec) : 50=21.84%, 100=70.98%, 250=7.18% 00:23:59.515 cpu : usr=33.64%, sys=0.98%, ctx=936, majf=0, minf=9 00:23:59.515 IO depths : 1=1.2%, 2=2.9%, 4=11.0%, 8=72.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:23:59.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 issued rwts: total=2285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.515 filename2: (groupid=0, jobs=1): err= 0: pid=98641: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=210, BW=841KiB/s (862kB/s)(8416KiB/10003msec) 00:23:59.515 slat (usec): min=4, max=7028, avg=14.21, stdev=153.03 00:23:59.515 clat (msec): min=6, max=151, avg=75.96, stdev=20.78 00:23:59.515 lat (msec): min=6, max=151, avg=75.97, stdev=20.79 00:23:59.515 clat percentiles (msec): 00:23:59.515 | 1.00th=[ 14], 5.00th=[ 43], 10.00th=[ 53], 20.00th=[ 63], 00:23:59.515 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:23:59.515 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 110], 00:23:59.515 | 99.00th=[ 125], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:23:59.515 | 99.99th=[ 153] 00:23:59.515 bw ( KiB/s): min= 640, max= 1120, per=3.74%, avg=831.16, stdev=109.17, samples=19 00:23:59.515 iops : min= 160, max= 280, avg=207.79, stdev=27.29, samples=19 00:23:59.515 lat (msec) : 10=0.76%, 20=0.76%, 50=7.32%, 100=79.09%, 250=12.07% 00:23:59.515 cpu : usr=42.10%, sys=1.22%, ctx=1239, majf=0, minf=9 00:23:59.515 IO depths : 1=1.0%, 2=2.3%, 4=10.2%, 8=73.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:23:59.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.515 filename2: (groupid=0, jobs=1): err= 0: pid=98642: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=246, BW=987KiB/s (1011kB/s)(9904KiB/10032msec) 00:23:59.515 slat (usec): min=7, max=8017, avg=16.60, 
stdev=191.95 00:23:59.515 clat (msec): min=16, max=141, avg=64.72, stdev=20.19 00:23:59.515 lat (msec): min=16, max=141, avg=64.74, stdev=20.19 00:23:59.515 clat percentiles (msec): 00:23:59.515 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 47], 00:23:59.515 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 69], 00:23:59.515 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 107], 00:23:59.515 | 99.00th=[ 117], 99.50th=[ 134], 99.90th=[ 142], 99.95th=[ 142], 00:23:59.515 | 99.99th=[ 142] 00:23:59.515 bw ( KiB/s): min= 688, max= 1248, per=4.43%, avg=983.20, stdev=156.75, samples=20 00:23:59.515 iops : min= 172, max= 312, avg=245.75, stdev=39.18, samples=20 00:23:59.515 lat (msec) : 20=0.24%, 50=28.63%, 100=65.23%, 250=5.90% 00:23:59.515 cpu : usr=42.68%, sys=1.35%, ctx=1366, majf=0, minf=9 00:23:59.515 IO depths : 1=0.9%, 2=2.1%, 4=8.8%, 8=75.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:23:59.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 issued rwts: total=2476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.515 filename2: (groupid=0, jobs=1): err= 0: pid=98643: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=237, BW=950KiB/s (973kB/s)(9520KiB/10024msec) 00:23:59.515 slat (usec): min=4, max=4022, avg=15.46, stdev=130.18 00:23:59.515 clat (msec): min=33, max=143, avg=67.27, stdev=20.49 00:23:59.515 lat (msec): min=33, max=143, avg=67.28, stdev=20.49 00:23:59.515 clat percentiles (msec): 00:23:59.515 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 48], 00:23:59.515 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 66], 60.00th=[ 71], 00:23:59.515 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 97], 95.00th=[ 108], 00:23:59.515 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 144], 00:23:59.515 | 99.99th=[ 144] 00:23:59.515 bw ( KiB/s): min= 768, max= 1248, per=4.26%, avg=945.70, stdev=143.02, samples=20 00:23:59.515 iops : min= 192, max= 312, avg=236.40, stdev=35.72, samples=20 00:23:59.515 lat (msec) : 50=26.55%, 100=65.17%, 250=8.28% 00:23:59.515 cpu : usr=44.28%, sys=1.36%, ctx=1492, majf=0, minf=9 00:23:59.515 IO depths : 1=1.6%, 2=3.4%, 4=11.6%, 8=71.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:23:59.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.515 issued rwts: total=2380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.515 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.515 filename2: (groupid=0, jobs=1): err= 0: pid=98644: Mon Jul 15 19:52:23 2024 00:23:59.515 read: IOPS=216, BW=866KiB/s (887kB/s)(8672KiB/10013msec) 00:23:59.515 slat (usec): min=4, max=8024, avg=20.02, stdev=258.11 00:23:59.515 clat (msec): min=31, max=139, avg=73.76, stdev=19.97 00:23:59.515 lat (msec): min=32, max=139, avg=73.78, stdev=19.97 00:23:59.515 clat percentiles (msec): 00:23:59.516 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:23:59.516 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:23:59.516 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 103], 95.00th=[ 111], 00:23:59.516 | 99.00th=[ 122], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 140], 00:23:59.516 | 99.99th=[ 140] 00:23:59.516 bw ( KiB/s): min= 744, max= 1048, per=3.87%, avg=860.80, stdev=85.55, samples=20 00:23:59.516 iops : min= 186, max= 262, avg=215.20, stdev=21.39, samples=20 
00:23:59.516 lat (msec) : 50=14.76%, 100=72.23%, 250=13.01% 00:23:59.516 cpu : usr=42.13%, sys=1.18%, ctx=1437, majf=0, minf=9 00:23:59.516 IO depths : 1=2.1%, 2=4.5%, 4=12.8%, 8=69.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:23:59.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.516 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.516 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.516 filename2: (groupid=0, jobs=1): err= 0: pid=98645: Mon Jul 15 19:52:23 2024 00:23:59.516 read: IOPS=254, BW=1019KiB/s (1043kB/s)(10.00MiB/10047msec) 00:23:59.516 slat (usec): min=3, max=8020, avg=20.75, stdev=275.17 00:23:59.516 clat (msec): min=3, max=129, avg=62.62, stdev=21.12 00:23:59.516 lat (msec): min=3, max=129, avg=62.64, stdev=21.13 00:23:59.516 clat percentiles (msec): 00:23:59.516 | 1.00th=[ 6], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 47], 00:23:59.516 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 71], 00:23:59.516 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 97], 00:23:59.516 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:23:59.516 | 99.99th=[ 130] 00:23:59.516 bw ( KiB/s): min= 816, max= 1856, per=4.58%, avg=1016.95, stdev=235.11, samples=20 00:23:59.516 iops : min= 204, max= 464, avg=254.10, stdev=58.78, samples=20 00:23:59.516 lat (msec) : 4=0.90%, 10=1.25%, 20=0.63%, 50=32.16%, 100=62.13% 00:23:59.516 lat (msec) : 250=2.93% 00:23:59.516 cpu : usr=35.40%, sys=0.98%, ctx=1049, majf=0, minf=0 00:23:59.516 IO depths : 1=0.3%, 2=0.7%, 4=6.2%, 8=79.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:23:59.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.516 complete : 0=0.0%, 4=89.3%, 8=6.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.516 issued rwts: total=2559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.516 filename2: (groupid=0, jobs=1): err= 0: pid=98646: Mon Jul 15 19:52:23 2024 00:23:59.516 read: IOPS=231, BW=924KiB/s (947kB/s)(9292KiB/10052msec) 00:23:59.516 slat (usec): min=3, max=8019, avg=19.33, stdev=256.92 00:23:59.516 clat (msec): min=18, max=143, avg=69.08, stdev=21.50 00:23:59.516 lat (msec): min=18, max=143, avg=69.10, stdev=21.51 00:23:59.516 clat percentiles (msec): 00:23:59.516 | 1.00th=[ 23], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 49], 00:23:59.516 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:23:59.516 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 108], 00:23:59.516 | 99.00th=[ 123], 99.50th=[ 123], 99.90th=[ 144], 99.95th=[ 144], 00:23:59.516 | 99.99th=[ 144] 00:23:59.516 bw ( KiB/s): min= 680, max= 1280, per=4.15%, avg=922.55, stdev=165.99, samples=20 00:23:59.516 iops : min= 170, max= 320, avg=230.60, stdev=41.51, samples=20 00:23:59.516 lat (msec) : 20=0.69%, 50=22.90%, 100=68.96%, 250=7.45% 00:23:59.516 cpu : usr=35.90%, sys=1.03%, ctx=1069, majf=0, minf=9 00:23:59.516 IO depths : 1=1.6%, 2=3.9%, 4=12.5%, 8=70.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:23:59.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.516 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.516 issued rwts: total=2323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.516 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:59.516 00:23:59.516 Run status group 0 (all jobs): 00:23:59.516 READ: bw=21.7MiB/s (22.7MB/s), 
799KiB/s-1055KiB/s (818kB/s-1080kB/s), io=218MiB (229MB), run=10002-10055msec 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 
19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 bdev_null0 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 [2024-07-15 19:52:23.513000] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 bdev_null1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:59.516 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:59.517 { 00:23:59.517 "params": { 00:23:59.517 "name": "Nvme$subsystem", 00:23:59.517 "trtype": "$TEST_TRANSPORT", 00:23:59.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.517 "adrfam": "ipv4", 00:23:59.517 "trsvcid": "$NVMF_PORT", 00:23:59.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.517 "hdgst": ${hdgst:-false}, 00:23:59.517 "ddgst": ${ddgst:-false} 00:23:59.517 }, 00:23:59.517 "method": "bdev_nvme_attach_controller" 00:23:59.517 } 00:23:59.517 EOF 00:23:59.517 )") 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:59.517 { 00:23:59.517 "params": { 00:23:59.517 "name": "Nvme$subsystem", 00:23:59.517 "trtype": "$TEST_TRANSPORT", 00:23:59.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.517 "adrfam": "ipv4", 00:23:59.517 "trsvcid": "$NVMF_PORT", 00:23:59.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.517 "hdgst": ${hdgst:-false}, 00:23:59.517 "ddgst": ${ddgst:-false} 00:23:59.517 }, 00:23:59.517 "method": "bdev_nvme_attach_controller" 00:23:59.517 } 00:23:59.517 EOF 00:23:59.517 )") 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:59.517 "params": { 00:23:59.517 "name": "Nvme0", 00:23:59.517 "trtype": "tcp", 00:23:59.517 "traddr": "10.0.0.2", 00:23:59.517 "adrfam": "ipv4", 00:23:59.517 "trsvcid": "4420", 00:23:59.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:59.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:59.517 "hdgst": false, 00:23:59.517 "ddgst": false 00:23:59.517 }, 00:23:59.517 "method": "bdev_nvme_attach_controller" 00:23:59.517 },{ 00:23:59.517 "params": { 00:23:59.517 "name": "Nvme1", 00:23:59.517 "trtype": "tcp", 00:23:59.517 "traddr": "10.0.0.2", 00:23:59.517 "adrfam": "ipv4", 00:23:59.517 "trsvcid": "4420", 00:23:59.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.517 "hdgst": false, 00:23:59.517 "ddgst": false 00:23:59.517 }, 00:23:59.517 "method": "bdev_nvme_attach_controller" 00:23:59.517 }' 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:59.517 19:52:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:59.517 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:59.517 ... 00:23:59.517 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:59.517 ... 
00:23:59.517 fio-3.35 00:23:59.517 Starting 4 threads 00:24:03.717 00:24:03.717 filename0: (groupid=0, jobs=1): err= 0: pid=98773: Mon Jul 15 19:52:29 2024 00:24:03.717 read: IOPS=1980, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5002msec) 00:24:03.717 slat (nsec): min=7758, max=73847, avg=15740.10, stdev=4267.32 00:24:03.717 clat (usec): min=3072, max=4911, avg=3962.84, stdev=73.89 00:24:03.717 lat (usec): min=3084, max=4925, avg=3978.58, stdev=74.29 00:24:03.717 clat percentiles (usec): 00:24:03.717 | 1.00th=[ 3818], 5.00th=[ 3884], 10.00th=[ 3884], 20.00th=[ 3916], 00:24:03.717 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 3949], 60.00th=[ 3982], 00:24:03.717 | 70.00th=[ 3982], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:24:03.717 | 99.00th=[ 4113], 99.50th=[ 4146], 99.90th=[ 4555], 99.95th=[ 4621], 00:24:03.717 | 99.99th=[ 4883] 00:24:03.717 bw ( KiB/s): min=15744, max=16000, per=25.01%, avg=15843.56, stdev=85.33, samples=9 00:24:03.717 iops : min= 1968, max= 2000, avg=1980.44, stdev=10.67, samples=9 00:24:03.717 lat (msec) : 4=74.40%, 10=25.60% 00:24:03.717 cpu : usr=93.58%, sys=5.24%, ctx=84, majf=0, minf=9 00:24:03.717 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.717 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.717 issued rwts: total=9904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.717 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:03.717 filename0: (groupid=0, jobs=1): err= 0: pid=98774: Mon Jul 15 19:52:29 2024 00:24:03.718 read: IOPS=1978, BW=15.5MiB/s (16.2MB/s)(77.3MiB/5001msec) 00:24:03.718 slat (nsec): min=6899, max=52205, avg=15243.15, stdev=4462.37 00:24:03.718 clat (usec): min=2719, max=7253, avg=3966.38, stdev=146.90 00:24:03.718 lat (usec): min=2729, max=7280, avg=3981.62, stdev=147.20 00:24:03.718 clat percentiles (usec): 00:24:03.718 | 1.00th=[ 3818], 5.00th=[ 3884], 10.00th=[ 3884], 20.00th=[ 3916], 00:24:03.718 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 3949], 60.00th=[ 3982], 00:24:03.718 | 70.00th=[ 3982], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:24:03.718 | 99.00th=[ 4146], 99.50th=[ 4178], 99.90th=[ 6259], 99.95th=[ 7242], 00:24:03.718 | 99.99th=[ 7242] 00:24:03.718 bw ( KiB/s): min=15647, max=16000, per=24.99%, avg=15832.78, stdev=103.64, samples=9 00:24:03.718 iops : min= 1955, max= 2000, avg=1979.00, stdev=13.15, samples=9 00:24:03.718 lat (msec) : 4=74.23%, 10=25.77% 00:24:03.718 cpu : usr=94.30%, sys=4.60%, ctx=15, majf=0, minf=9 00:24:03.718 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.718 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.718 issued rwts: total=9896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.718 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:03.718 filename1: (groupid=0, jobs=1): err= 0: pid=98775: Mon Jul 15 19:52:29 2024 00:24:03.718 read: IOPS=1982, BW=15.5MiB/s (16.2MB/s)(77.5MiB/5004msec) 00:24:03.718 slat (nsec): min=6486, max=38179, avg=8523.06, stdev=2478.80 00:24:03.718 clat (usec): min=1268, max=4600, avg=3991.84, stdev=124.36 00:24:03.718 lat (usec): min=1281, max=4613, avg=4000.36, stdev=124.22 00:24:03.718 clat percentiles (usec): 00:24:03.718 | 1.00th=[ 3851], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3949], 00:24:03.718 | 30.00th=[ 3982], 40.00th=[ 3982], 
50.00th=[ 3982], 60.00th=[ 4015], 00:24:03.718 | 70.00th=[ 4015], 80.00th=[ 4047], 90.00th=[ 4080], 95.00th=[ 4080], 00:24:03.718 | 99.00th=[ 4146], 99.50th=[ 4178], 99.90th=[ 4555], 99.95th=[ 4555], 00:24:03.718 | 99.99th=[ 4621] 00:24:03.718 bw ( KiB/s): min=15744, max=16000, per=25.06%, avg=15872.00, stdev=90.51, samples=9 00:24:03.718 iops : min= 1968, max= 2000, avg=1984.00, stdev=11.31, samples=9 00:24:03.718 lat (msec) : 2=0.16%, 4=57.53%, 10=42.31% 00:24:03.718 cpu : usr=94.34%, sys=4.58%, ctx=8, majf=0, minf=0 00:24:03.718 IO depths : 1=11.2%, 2=25.0%, 4=50.0%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.718 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.718 issued rwts: total=9920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.718 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:03.718 filename1: (groupid=0, jobs=1): err= 0: pid=98776: Mon Jul 15 19:52:29 2024 00:24:03.718 read: IOPS=1979, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5003msec) 00:24:03.718 slat (usec): min=6, max=152, avg=13.32, stdev= 5.51 00:24:03.718 clat (usec): min=3015, max=5009, avg=3981.44, stdev=86.74 00:24:03.718 lat (usec): min=3028, max=5023, avg=3994.77, stdev=85.78 00:24:03.718 clat percentiles (usec): 00:24:03.718 | 1.00th=[ 3818], 5.00th=[ 3884], 10.00th=[ 3916], 20.00th=[ 3916], 00:24:03.718 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 3982], 60.00th=[ 3982], 00:24:03.718 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:24:03.718 | 99.00th=[ 4146], 99.50th=[ 4228], 99.90th=[ 4883], 99.95th=[ 4948], 00:24:03.718 | 99.99th=[ 5014] 00:24:03.718 bw ( KiB/s): min=15744, max=16000, per=25.01%, avg=15843.56, stdev=85.33, samples=9 00:24:03.718 iops : min= 1968, max= 2000, avg=1980.44, stdev=10.67, samples=9 00:24:03.718 lat (msec) : 4=64.45%, 10=35.55% 00:24:03.718 cpu : usr=94.18%, sys=4.66%, ctx=57, majf=0, minf=0 00:24:03.718 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.718 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.718 issued rwts: total=9904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.718 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:03.718 00:24:03.718 Run status group 0 (all jobs): 00:24:03.718 READ: bw=61.9MiB/s (64.9MB/s), 15.5MiB/s-15.5MiB/s (16.2MB/s-16.2MB/s), io=310MiB (325MB), run=5001-5004msec 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.977 ************************************ 00:24:03.977 END TEST fio_dif_rand_params 00:24:03.977 ************************************ 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.977 00:24:03.977 real 0m23.640s 00:24:03.977 user 2m5.964s 00:24:03.977 sys 0m5.384s 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:03.977 19:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.977 19:52:29 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:24:03.977 19:52:29 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:03.977 19:52:29 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:03.977 19:52:29 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.977 19:52:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:03.978 ************************************ 00:24:03.978 START TEST fio_dif_digest 00:24:03.978 ************************************ 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:03.978 19:52:29 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.978 bdev_null0 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.978 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:04.272 [2024-07-15 19:52:29.763908] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.272 { 00:24:04.272 "params": { 00:24:04.272 "name": "Nvme$subsystem", 00:24:04.272 "trtype": "$TEST_TRANSPORT", 00:24:04.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.272 "adrfam": "ipv4", 00:24:04.272 "trsvcid": "$NVMF_PORT", 00:24:04.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.272 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.272 "hdgst": ${hdgst:-false}, 00:24:04.272 "ddgst": ${ddgst:-false} 00:24:04.272 }, 00:24:04.272 "method": "bdev_nvme_attach_controller" 00:24:04.272 } 00:24:04.272 EOF 00:24:04.272 )") 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:04.272 19:52:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:04.273 "params": { 00:24:04.273 "name": "Nvme0", 00:24:04.273 "trtype": "tcp", 00:24:04.273 "traddr": "10.0.0.2", 00:24:04.273 "adrfam": "ipv4", 00:24:04.273 "trsvcid": "4420", 00:24:04.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:04.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:04.273 "hdgst": true, 00:24:04.273 "ddgst": true 00:24:04.273 }, 00:24:04.273 "method": "bdev_nvme_attach_controller" 00:24:04.273 }' 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:04.273 19:52:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:04.273 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:04.273 ... 
00:24:04.273 fio-3.35 00:24:04.273 Starting 3 threads 00:24:16.471 00:24:16.471 filename0: (groupid=0, jobs=1): err= 0: pid=98882: Mon Jul 15 19:52:40 2024 00:24:16.471 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(278MiB/10007msec) 00:24:16.471 slat (nsec): min=7009, max=42609, avg=12197.74, stdev=3789.56 00:24:16.471 clat (usec): min=8959, max=17153, avg=13491.27, stdev=1025.20 00:24:16.471 lat (usec): min=8979, max=17180, avg=13503.46, stdev=1024.95 00:24:16.471 clat percentiles (usec): 00:24:16.471 | 1.00th=[10945], 5.00th=[11863], 10.00th=[12125], 20.00th=[12649], 00:24:16.471 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:24:16.471 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14746], 95.00th=[15008], 00:24:16.471 | 99.00th=[15926], 99.50th=[16319], 99.90th=[16712], 99.95th=[17171], 00:24:16.471 | 99.99th=[17171] 00:24:16.471 bw ( KiB/s): min=26880, max=29696, per=34.68%, avg=28426.53, stdev=749.59, samples=19 00:24:16.471 iops : min= 210, max= 232, avg=222.05, stdev= 5.89, samples=19 00:24:16.471 lat (msec) : 10=0.45%, 20=99.55% 00:24:16.471 cpu : usr=93.18%, sys=5.47%, ctx=14, majf=0, minf=0 00:24:16.471 IO depths : 1=3.4%, 2=96.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:16.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.471 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.471 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:16.471 filename0: (groupid=0, jobs=1): err= 0: pid=98883: Mon Jul 15 19:52:40 2024 00:24:16.471 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(308MiB/10008msec) 00:24:16.471 slat (nsec): min=7200, max=39744, avg=12529.42, stdev=2983.30 00:24:16.471 clat (usec): min=8697, max=53982, avg=12173.87, stdev=1622.14 00:24:16.471 lat (usec): min=8709, max=53994, avg=12186.40, stdev=1622.19 00:24:16.471 clat percentiles (usec): 00:24:16.471 | 1.00th=[10290], 5.00th=[10814], 10.00th=[11207], 20.00th=[11469], 00:24:16.471 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:24:16.471 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13042], 95.00th=[13304], 00:24:16.471 | 99.00th=[13698], 99.50th=[13829], 99.90th=[53216], 99.95th=[53216], 00:24:16.471 | 99.99th=[53740] 00:24:16.471 bw ( KiB/s): min=30720, max=33024, per=38.62%, avg=31649.68, stdev=651.06, samples=19 00:24:16.471 iops : min= 240, max= 258, avg=247.26, stdev= 5.09, samples=19 00:24:16.471 lat (msec) : 10=0.45%, 20=99.43%, 100=0.12% 00:24:16.471 cpu : usr=92.53%, sys=6.07%, ctx=24, majf=0, minf=9 00:24:16.471 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:16.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.471 issued rwts: total=2463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.471 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:16.471 filename0: (groupid=0, jobs=1): err= 0: pid=98884: Mon Jul 15 19:52:40 2024 00:24:16.471 read: IOPS=172, BW=21.5MiB/s (22.6MB/s)(215MiB/10007msec) 00:24:16.471 slat (nsec): min=6942, max=40412, avg=12193.44, stdev=4134.44 00:24:16.471 clat (usec): min=8259, max=20264, avg=17399.64, stdev=1122.76 00:24:16.471 lat (usec): min=8272, max=20279, avg=17411.83, stdev=1122.90 00:24:16.471 clat percentiles (usec): 00:24:16.471 | 1.00th=[15008], 5.00th=[15926], 10.00th=[16188], 20.00th=[16712], 00:24:16.471 | 
30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:24:16.471 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:24:16.471 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20317], 99.95th=[20317], 00:24:16.471 | 99.99th=[20317] 00:24:16.471 bw ( KiB/s): min=21504, max=23040, per=26.81%, avg=21975.58, stdev=492.15, samples=19 00:24:16.471 iops : min= 168, max= 180, avg=171.68, stdev= 3.84, samples=19 00:24:16.471 lat (msec) : 10=0.52%, 20=99.36%, 50=0.12% 00:24:16.471 cpu : usr=91.96%, sys=6.57%, ctx=105, majf=0, minf=9 00:24:16.471 IO depths : 1=14.7%, 2=85.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:16.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.471 issued rwts: total=1723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.471 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:16.471 00:24:16.471 Run status group 0 (all jobs): 00:24:16.471 READ: bw=80.0MiB/s (83.9MB/s), 21.5MiB/s-30.8MiB/s (22.6MB/s-32.3MB/s), io=801MiB (840MB), run=10007-10008msec 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.471 00:24:16.471 real 0m10.982s 00:24:16.471 user 0m28.403s 00:24:16.471 sys 0m2.068s 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.471 19:52:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:16.471 ************************************ 00:24:16.471 END TEST fio_dif_digest 00:24:16.471 ************************************ 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:24:16.471 19:52:40 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:16.471 19:52:40 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.471 rmmod nvme_tcp 00:24:16.471 rmmod nvme_fabrics 00:24:16.471 
rmmod nvme_keyring 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 98125 ']' 00:24:16.471 19:52:40 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 98125 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 98125 ']' 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 98125 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98125 00:24:16.471 killing process with pid 98125 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98125' 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@967 -- # kill 98125 00:24:16.471 19:52:40 nvmf_dif -- common/autotest_common.sh@972 -- # wait 98125 00:24:16.471 19:52:41 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:16.471 19:52:41 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:16.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:16.471 Waiting for block devices as requested 00:24:16.471 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:16.471 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:16.471 19:52:41 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.471 19:52:41 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.471 19:52:41 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.471 19:52:41 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.471 19:52:41 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.471 19:52:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:16.471 19:52:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.471 19:52:41 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:16.471 ************************************ 00:24:16.471 END TEST nvmf_dif 00:24:16.471 ************************************ 00:24:16.471 00:24:16.471 real 0m59.985s 00:24:16.471 user 3m51.235s 00:24:16.471 sys 0m15.415s 00:24:16.471 19:52:41 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.471 19:52:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:16.471 19:52:41 -- common/autotest_common.sh@1142 -- # return 0 00:24:16.471 19:52:41 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:16.471 19:52:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:16.471 19:52:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.471 19:52:41 -- common/autotest_common.sh@10 -- # set +x 00:24:16.471 ************************************ 00:24:16.471 START TEST nvmf_abort_qd_sizes 00:24:16.471 ************************************ 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 
-- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:16.471 * Looking for test storage... 00:24:16.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.471 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:16.472 19:52:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:16.472 Cannot find device "nvmf_tgt_br" 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:16.472 Cannot find device "nvmf_tgt_br2" 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:16.472 Cannot find device "nvmf_tgt_br" 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:16.472 Cannot find device "nvmf_tgt_br2" 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:16.472 19:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:16.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:16.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:16.472 19:52:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:16.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:16.472 00:24:16.472 --- 10.0.0.2 ping statistics --- 00:24:16.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.472 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:16.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:16.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:24:16.472 00:24:16.472 --- 10.0.0.3 ping statistics --- 00:24:16.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.472 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:16.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:24:16.472 00:24:16.472 --- 10.0.0.1 ping statistics --- 00:24:16.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.472 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:16.472 19:52:42 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:17.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:17.409 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:17.409 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99475 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99475 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99475 ']' 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.409 19:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:17.409 [2024-07-15 19:52:43.156113] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
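Note: the nvmf_veth_init records above reduce to the topology sketched below. This is only a condensed restatement of the commands already visible in the log (the individual `ip link set ... up` calls are elided), not an extra test step:

    # target-side interfaces live in their own network namespace; the host keeps
    # nvmf_init_if (10.0.0.1) and everything is stitched together with a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # once 10.0.0.2/10.0.0.3 answer pings, nvmf_tgt is started inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf

The host side therefore reaches the userspace target at 10.0.0.2:4420 through nvmf_br, which is why the abort passes below connect with traddr:10.0.0.2.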
00:24:17.409 [2024-07-15 19:52:43.156269] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.666 [2024-07-15 19:52:43.298030] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.923 [2024-07-15 19:52:43.459903] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.923 [2024-07-15 19:52:43.459992] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.923 [2024-07-15 19:52:43.460019] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.923 [2024-07-15 19:52:43.460030] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.923 [2024-07-15 19:52:43.460040] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.923 [2024-07-15 19:52:43.460206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.923 [2024-07-15 19:52:43.460339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.923 [2024-07-15 19:52:43.460968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.923 [2024-07-15 19:52:43.461018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:18.490 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:18.749 19:52:44 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.749 19:52:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 ************************************ 00:24:18.749 START TEST spdk_target_abort 00:24:18.749 ************************************ 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 spdk_targetn1 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 [2024-07-15 19:52:44.371819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 [2024-07-15 19:52:44.400008] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.749 19:52:44 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:18.749 19:52:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:22.028 Initializing NVMe Controllers 00:24:22.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:22.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:22.028 Initialization complete. Launching workers. 
00:24:22.028 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11076, failed: 0 00:24:22.028 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1057, failed to submit 10019 00:24:22.028 success 738, unsuccess 319, failed 0 00:24:22.028 19:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:22.028 19:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:25.312 Initializing NVMe Controllers 00:24:25.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:25.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:25.312 Initialization complete. Launching workers. 00:24:25.312 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5909, failed: 0 00:24:25.312 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 4661 00:24:25.312 success 267, unsuccess 981, failed 0 00:24:25.312 19:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:25.312 19:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:28.594 Initializing NVMe Controllers 00:24:28.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:28.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:28.594 Initialization complete. Launching workers. 
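Note: each abort pass above (and the qd=64 pass whose results follow) is the same invocation of the SPDK abort example with only -q changed; a condensed sketch, with the flag meanings read per the usual SPDK example-tool conventions rather than restated from the log:

    # -q          queue depth under test (4, 24, then 64)
    # -w rw -M 50 mixed read/write workload, nominally 50% reads
    # -o 4096     4 KiB I/O size
    # -r          transport ID of the subsystem created by the RPCs above
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

In the summary lines, the NS record counts I/Os completed on the namespace, the CTRLR record counts abort commands submitted versus those that could not be submitted, and the success/unsuccess tallies appear to distinguish aborts that did or did not catch their target command still in flight.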
00:24:28.594 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30328, failed: 0 00:24:28.594 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2646, failed to submit 27682 00:24:28.594 success 439, unsuccess 2207, failed 0 00:24:28.594 19:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:28.594 19:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.594 19:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:28.594 19:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.595 19:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:28.595 19:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.595 19:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99475 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99475 ']' 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99475 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99475 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:29.531 killing process with pid 99475 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99475' 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99475 00:24:29.531 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99475 00:24:29.789 00:24:29.789 real 0m11.231s 00:24:29.789 user 0m45.660s 00:24:29.789 sys 0m1.774s 00:24:29.789 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:29.789 ************************************ 00:24:29.789 19:52:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.789 END TEST spdk_target_abort 00:24:29.789 ************************************ 00:24:29.789 19:52:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:29.789 19:52:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:29.789 19:52:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:29.789 19:52:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.790 19:52:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.790 
************************************ 00:24:29.790 START TEST kernel_target_abort 00:24:29.790 ************************************ 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:29.790 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:30.049 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:30.049 19:52:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:30.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:30.308 Waiting for block devices as requested 00:24:30.308 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:30.308 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:30.566 No valid GPT data, bailing 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:30.566 No valid GPT data, bailing 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:30.566 No valid GPT data, bailing 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:30.566 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:30.849 No valid GPT data, bailing 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb --hostid=da6ed89b-4087-4382-817f-6cf647cbfbeb -a 10.0.0.1 -t tcp -s 4420 00:24:30.849 00:24:30.849 Discovery Log Number of Records 2, Generation counter 2 00:24:30.849 =====Discovery Log Entry 0====== 00:24:30.849 trtype: tcp 00:24:30.849 adrfam: ipv4 00:24:30.849 subtype: current discovery subsystem 00:24:30.849 treq: not specified, sq flow control disable supported 00:24:30.849 portid: 1 00:24:30.849 trsvcid: 4420 00:24:30.849 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:30.849 traddr: 10.0.0.1 00:24:30.849 eflags: none 00:24:30.849 sectype: none 00:24:30.849 =====Discovery Log Entry 1====== 00:24:30.849 trtype: tcp 00:24:30.849 adrfam: ipv4 00:24:30.849 subtype: nvme subsystem 00:24:30.849 treq: not specified, sq flow control disable supported 00:24:30.849 portid: 1 00:24:30.849 trsvcid: 4420 00:24:30.849 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:30.849 traddr: 10.0.0.1 00:24:30.849 eflags: none 00:24:30.849 sectype: none 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:30.849 19:52:56 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:30.849 19:52:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:34.177 Initializing NVMe Controllers 00:24:34.177 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:34.177 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:34.177 Initialization complete. Launching workers. 00:24:34.177 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32408, failed: 0 00:24:34.177 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32408, failed to submit 0 00:24:34.177 success 0, unsuccess 32408, failed 0 00:24:34.177 19:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:34.177 19:52:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:37.517 Initializing NVMe Controllers 00:24:37.517 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:37.517 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:37.517 Initialization complete. Launching workers. 
00:24:37.517 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68814, failed: 0 00:24:37.517 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29795, failed to submit 39019 00:24:37.517 success 0, unsuccess 29795, failed 0 00:24:37.517 19:53:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:37.517 19:53:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.800 Initializing NVMe Controllers 00:24:40.800 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:40.800 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:40.800 Initialization complete. Launching workers. 00:24:40.800 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82080, failed: 0 00:24:40.800 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20494, failed to submit 61586 00:24:40.800 success 0, unsuccess 20494, failed 0 00:24:40.800 19:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:40.800 19:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:40.800 19:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:40.800 19:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.800 19:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:40.800 19:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:40.800 19:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.800 19:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:40.800 19:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:40.800 19:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:41.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:42.957 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:42.957 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:43.251 00:24:43.251 real 0m13.183s 00:24:43.251 user 0m6.136s 00:24:43.251 sys 0m4.454s 00:24:43.251 19:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.251 ************************************ 00:24:43.251 19:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:43.251 END TEST kernel_target_abort 00:24:43.251 ************************************ 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:43.251 
19:53:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:43.251 rmmod nvme_tcp 00:24:43.251 rmmod nvme_fabrics 00:24:43.251 rmmod nvme_keyring 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99475 ']' 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99475 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99475 ']' 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99475 00:24:43.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99475) - No such process 00:24:43.251 Process with pid 99475 is not found 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99475 is not found' 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:43.251 19:53:08 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:43.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:43.520 Waiting for block devices as requested 00:24:43.520 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:43.778 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:43.778 00:24:43.778 real 0m27.715s 00:24:43.778 user 0m53.048s 00:24:43.778 sys 0m7.592s 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.778 19:53:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:43.778 ************************************ 00:24:43.778 END TEST nvmf_abort_qd_sizes 00:24:43.778 ************************************ 00:24:43.778 19:53:09 -- common/autotest_common.sh@1142 -- # return 0 00:24:43.778 19:53:09 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:43.778 19:53:09 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:24:43.778 19:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.778 19:53:09 -- common/autotest_common.sh@10 -- # set +x 00:24:43.778 ************************************ 00:24:43.778 START TEST keyring_file 00:24:43.778 ************************************ 00:24:43.778 19:53:09 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:44.035 * Looking for test storage... 00:24:44.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:44.035 19:53:09 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:44.035 19:53:09 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.035 19:53:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:44.036 19:53:09 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.036 19:53:09 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.036 19:53:09 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.036 19:53:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.036 19:53:09 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.036 19:53:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.036 19:53:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:44.036 19:53:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KR3z4PWIGo 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KR3z4PWIGo 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KR3z4PWIGo 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.KR3z4PWIGo 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OK3pvB0Amu 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:44.036 19:53:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OK3pvB0Amu 00:24:44.036 19:53:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OK3pvB0Amu 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.OK3pvB0Amu 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=100356 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.036 19:53:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100356 00:24:44.036 19:53:09 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100356 ']' 00:24:44.036 19:53:09 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.036 19:53:09 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.036 19:53:09 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.036 19:53:09 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.036 19:53:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 [2024-07-15 19:53:09.823822] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
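For reference, prep_key above runs each hex string through format_interchange_psk, whose embedded python one-liner emits an NVMe TLS PSK interchange string of the form "NVMeTLSkey-1:<digest>:<base64 payload>:" that is then written to the mktemp path and chmod'd to 0600. A minimal standalone sketch of that encoding follows; treating the hex string as raw ASCII secret bytes, appending a little-endian CRC32, and using "00" as the no-hash digest indicator are assumptions inferred from the key format, not lines copied out of nvmf/common.sh:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # assumption: the hex string is used as raw ASCII bytes
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumption: 4-byte little-endian CRC32 appended to the secret
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(secret + crc).decode())
PY

The printed string is the sort of value the test then registers with keyring_file_add_key.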
00:24:44.294 [2024-07-15 19:53:09.823938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100356 ] 00:24:44.294 [2024-07-15 19:53:09.965493] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.552 [2024-07-15 19:53:10.089487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:45.119 19:53:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:45.119 [2024-07-15 19:53:10.826856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.119 null0 00:24:45.119 [2024-07-15 19:53:10.858832] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:45.119 [2024-07-15 19:53:10.859043] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:45.119 [2024-07-15 19:53:10.866832] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.119 19:53:10 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:45.119 [2024-07-15 19:53:10.878818] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:45.119 2024/07/15 19:53:10 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:24:45.119 request: 00:24:45.119 { 00:24:45.119 "method": "nvmf_subsystem_add_listener", 00:24:45.119 "params": { 00:24:45.119 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.119 "secure_channel": false, 00:24:45.119 "listen_address": { 00:24:45.119 "trtype": "tcp", 00:24:45.119 "traddr": "127.0.0.1", 00:24:45.119 "trsvcid": "4420" 00:24:45.119 } 00:24:45.119 } 00:24:45.119 } 00:24:45.119 Got JSON-RPC error 
response 00:24:45.119 GoRPCClient: error on JSON-RPC call 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:45.119 19:53:10 keyring_file -- keyring/file.sh@46 -- # bperfpid=100391 00:24:45.119 19:53:10 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:45.119 19:53:10 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100391 /var/tmp/bperf.sock 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100391 ']' 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:45.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.119 19:53:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:45.377 [2024-07-15 19:53:10.945412] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:24:45.377 [2024-07-15 19:53:10.945513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100391 ] 00:24:45.377 [2024-07-15 19:53:11.083458] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.635 [2024-07-15 19:53:11.198413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.202 19:53:11 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.202 19:53:11 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:46.202 19:53:11 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo 00:24:46.202 19:53:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo 00:24:46.460 19:53:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OK3pvB0Amu 00:24:46.460 19:53:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OK3pvB0Amu 00:24:46.718 19:53:12 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:46.718 19:53:12 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:46.718 19:53:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:46.718 19:53:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.718 19:53:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.283 19:53:12 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.KR3z4PWIGo == 
\/\t\m\p\/\t\m\p\.\K\R\3\z\4\P\W\I\G\o ]] 00:24:47.283 19:53:12 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:47.283 19:53:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:47.283 19:53:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.283 19:53:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.283 19:53:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:47.542 19:53:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.OK3pvB0Amu == \/\t\m\p\/\t\m\p\.\O\K\3\p\v\B\0\A\m\u ]] 00:24:47.542 19:53:13 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:47.542 19:53:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:47.542 19:53:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.542 19:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.542 19:53:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.542 19:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:47.800 19:53:13 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:47.800 19:53:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:47.800 19:53:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:47.800 19:53:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.800 19:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.800 19:53:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.800 19:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:48.057 19:53:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:48.057 19:53:13 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:48.057 19:53:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:48.315 [2024-07-15 19:53:13.867205] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.315 nvme0n1 00:24:48.315 19:53:13 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:48.315 19:53:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:48.315 19:53:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:48.315 19:53:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:48.315 19:53:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:48.315 19:53:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:48.574 19:53:14 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:48.574 19:53:14 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:48.574 19:53:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:48.574 19:53:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:48.574 19:53:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:24:48.574 19:53:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:48.574 19:53:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:48.832 19:53:14 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:48.832 19:53:14 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.832 Running I/O for 1 seconds... 00:24:50.208 00:24:50.208 Latency(us) 00:24:50.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.208 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:50.208 nvme0n1 : 1.01 12374.82 48.34 0.00 0.00 10303.85 6166.34 18230.92 00:24:50.208 =================================================================================================================== 00:24:50.208 Total : 12374.82 48.34 0.00 0.00 10303.85 6166.34 18230.92 00:24:50.208 0 00:24:50.208 19:53:15 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:50.208 19:53:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:50.208 19:53:15 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:50.208 19:53:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:50.208 19:53:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:50.208 19:53:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.208 19:53:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.208 19:53:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:50.466 19:53:16 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:50.466 19:53:16 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:50.466 19:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:50.466 19:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:50.466 19:53:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.466 19:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:50.466 19:53:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.723 19:53:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:50.723 19:53:16 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:50.723 19:53:16 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:50.723 19:53:16 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:50.723 19:53:16 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:50.723 19:53:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.723 19:53:16 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:50.723 19:53:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
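Everything in this stretch rides on bdevperf's private RPC socket (-r /var/tmp/bperf.sock). Condensed into plain rpc.py calls, the sequence the harness drives is the one below; the commands appear verbatim in the trace, and the /tmp paths are this run's mktemp names. The same attach call with --psk key1 is what gets wrapped in NOT just below and is expected to fail:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# register both PSK files with the bdevperf keyring
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo
"$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.OK3pvB0Amu

# inspect a key's reference count via keyring_get_keys
"$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

# attach an NVMe/TCP controller that uses key0 as the TLS PSK (creates bdev nvme0n1)
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0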
00:24:50.723 19:53:16 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:50.723 19:53:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:50.980 [2024-07-15 19:53:16.643258] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:50.980 [2024-07-15 19:53:16.643479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b30e0 (107): Transport endpoint is not connected 00:24:50.980 [2024-07-15 19:53:16.644468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b30e0 (9): Bad file descriptor 00:24:50.980 [2024-07-15 19:53:16.645465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.980 [2024-07-15 19:53:16.645489] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:50.980 [2024-07-15 19:53:16.645499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.980 2024/07/15 19:53:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:50.980 request: 00:24:50.980 { 00:24:50.980 "method": "bdev_nvme_attach_controller", 00:24:50.980 "params": { 00:24:50.980 "name": "nvme0", 00:24:50.980 "trtype": "tcp", 00:24:50.980 "traddr": "127.0.0.1", 00:24:50.980 "adrfam": "ipv4", 00:24:50.980 "trsvcid": "4420", 00:24:50.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:50.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:50.980 "prchk_reftag": false, 00:24:50.980 "prchk_guard": false, 00:24:50.980 "hdgst": false, 00:24:50.980 "ddgst": false, 00:24:50.980 "psk": "key1" 00:24:50.980 } 00:24:50.980 } 00:24:50.980 Got JSON-RPC error response 00:24:50.980 GoRPCClient: error on JSON-RPC call 00:24:50.980 19:53:16 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:50.980 19:53:16 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:50.981 19:53:16 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:50.981 19:53:16 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:50.981 19:53:16 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:50.981 19:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:50.981 19:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:50.981 19:53:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.981 19:53:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.981 19:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:51.238 19:53:16 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:51.238 
19:53:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:51.238 19:53:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:51.238 19:53:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:51.238 19:53:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:51.238 19:53:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:51.238 19:53:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:51.494 19:53:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:51.494 19:53:17 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:51.494 19:53:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:51.750 19:53:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:51.750 19:53:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:52.006 19:53:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:52.006 19:53:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:52.006 19:53:17 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:52.264 19:53:17 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:52.264 19:53:17 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.KR3z4PWIGo 00:24:52.264 19:53:17 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo 00:24:52.264 19:53:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:52.264 19:53:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo 00:24:52.264 19:53:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:52.264 19:53:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.264 19:53:17 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:52.264 19:53:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:52.264 19:53:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo 00:24:52.264 19:53:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo 00:24:52.521 [2024-07-15 19:53:18.222942] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KR3z4PWIGo': 0100660 00:24:52.521 [2024-07-15 19:53:18.222985] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:52.521 2024/07/15 19:53:18 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.KR3z4PWIGo], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:24:52.521 request: 00:24:52.521 { 00:24:52.521 "method": "keyring_file_add_key", 00:24:52.521 "params": { 00:24:52.521 "name": "key0", 00:24:52.521 "path": "/tmp/tmp.KR3z4PWIGo" 00:24:52.521 } 00:24:52.521 } 00:24:52.521 Got JSON-RPC error response 00:24:52.521 GoRPCClient: error on JSON-RPC call 00:24:52.521 19:53:18 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:24:52.521 19:53:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:52.521 19:53:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:52.521 19:53:18 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:52.521 19:53:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.KR3z4PWIGo 00:24:52.521 19:53:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo 00:24:52.521 19:53:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KR3z4PWIGo 00:24:52.778 19:53:18 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.KR3z4PWIGo 00:24:52.778 19:53:18 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:52.778 19:53:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:52.778 19:53:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:52.778 19:53:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:52.778 19:53:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:52.778 19:53:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:53.341 19:53:18 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:53.341 19:53:18 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:53.341 19:53:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:53.341 19:53:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:53.341 19:53:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:53.341 19:53:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.341 19:53:18 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:53.341 19:53:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.341 19:53:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:53.341 19:53:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:53.341 [2024-07-15 19:53:19.035190] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.KR3z4PWIGo': No such file or directory 00:24:53.341 [2024-07-15 19:53:19.035228] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:53.341 [2024-07-15 19:53:19.035254] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:53.341 [2024-07-15 19:53:19.035263] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:53.341 [2024-07-15 19:53:19.035273] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:53.342 2024/07/15 
19:53:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:24:53.342 request: 00:24:53.342 { 00:24:53.342 "method": "bdev_nvme_attach_controller", 00:24:53.342 "params": { 00:24:53.342 "name": "nvme0", 00:24:53.342 "trtype": "tcp", 00:24:53.342 "traddr": "127.0.0.1", 00:24:53.342 "adrfam": "ipv4", 00:24:53.342 "trsvcid": "4420", 00:24:53.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:53.342 "prchk_reftag": false, 00:24:53.342 "prchk_guard": false, 00:24:53.342 "hdgst": false, 00:24:53.342 "ddgst": false, 00:24:53.342 "psk": "key0" 00:24:53.342 } 00:24:53.342 } 00:24:53.342 Got JSON-RPC error response 00:24:53.342 GoRPCClient: error on JSON-RPC call 00:24:53.342 19:53:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:53.342 19:53:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:53.342 19:53:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:53.342 19:53:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:53.342 19:53:19 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:53.342 19:53:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:53.599 19:53:19 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v5fBC7ST7e 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:53.599 19:53:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:53.599 19:53:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:53.599 19:53:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:53.599 19:53:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:53.599 19:53:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:53.599 19:53:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v5fBC7ST7e 00:24:53.599 19:53:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v5fBC7ST7e 00:24:53.857 19:53:19 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.v5fBC7ST7e 00:24:53.857 19:53:19 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v5fBC7ST7e 00:24:53.857 19:53:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v5fBC7ST7e 00:24:54.115 19:53:19 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:54.115 19:53:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:54.373 nvme0n1 00:24:54.373 19:53:20 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:54.373 19:53:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:54.373 19:53:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:54.373 19:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.373 19:53:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:54.373 19:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:54.632 19:53:20 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:54.632 19:53:20 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:54.632 19:53:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:54.889 19:53:20 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:54.889 19:53:20 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:54.889 19:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.889 19:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:54.889 19:53:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.148 19:53:20 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:55.148 19:53:20 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:55.148 19:53:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:55.148 19:53:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.148 19:53:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.148 19:53:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.148 19:53:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:55.406 19:53:21 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:55.406 19:53:21 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:55.406 19:53:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:55.665 19:53:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:55.665 19:53:21 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:55.665 19:53:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.924 19:53:21 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:55.924 19:53:21 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v5fBC7ST7e 00:24:55.924 19:53:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v5fBC7ST7e 
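The remove/refcnt checks above lean on the per-key JSON that keyring_get_keys returns. A hypothetical entry for key0 right after keyring_file_remove_key, with the field names (name, path, refcnt, removed) inferred from the jq filters in the trace and the exact output shape an assumption:

# hypothetical keyring_get_keys output for key0 while the attached controller still holds a reference
echo '[{"name": "key0", "path": "/tmp/tmp.v5fBC7ST7e", "refcnt": 1, "removed": true}]' \
  | jq '.[] | select(.name == "key0") | {refcnt, removed}'
# -> {"refcnt": 1, "removed": true}, matching the [[ true == true ]] and (( 1 == 1 )) checks above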
00:24:56.183 19:53:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OK3pvB0Amu 00:24:56.183 19:53:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OK3pvB0Amu 00:24:56.441 19:53:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:56.441 19:53:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:56.699 nvme0n1 00:24:56.699 19:53:22 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:56.699 19:53:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:57.268 19:53:22 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:57.268 "subsystems": [ 00:24:57.268 { 00:24:57.268 "subsystem": "keyring", 00:24:57.268 "config": [ 00:24:57.268 { 00:24:57.268 "method": "keyring_file_add_key", 00:24:57.268 "params": { 00:24:57.268 "name": "key0", 00:24:57.268 "path": "/tmp/tmp.v5fBC7ST7e" 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "keyring_file_add_key", 00:24:57.268 "params": { 00:24:57.268 "name": "key1", 00:24:57.268 "path": "/tmp/tmp.OK3pvB0Amu" 00:24:57.268 } 00:24:57.268 } 00:24:57.268 ] 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "subsystem": "iobuf", 00:24:57.268 "config": [ 00:24:57.268 { 00:24:57.268 "method": "iobuf_set_options", 00:24:57.268 "params": { 00:24:57.268 "large_bufsize": 135168, 00:24:57.268 "large_pool_count": 1024, 00:24:57.268 "small_bufsize": 8192, 00:24:57.268 "small_pool_count": 8192 00:24:57.268 } 00:24:57.268 } 00:24:57.268 ] 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "subsystem": "sock", 00:24:57.268 "config": [ 00:24:57.268 { 00:24:57.268 "method": "sock_set_default_impl", 00:24:57.268 "params": { 00:24:57.268 "impl_name": "posix" 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "sock_impl_set_options", 00:24:57.268 "params": { 00:24:57.268 "enable_ktls": false, 00:24:57.268 "enable_placement_id": 0, 00:24:57.268 "enable_quickack": false, 00:24:57.268 "enable_recv_pipe": true, 00:24:57.268 "enable_zerocopy_send_client": false, 00:24:57.268 "enable_zerocopy_send_server": true, 00:24:57.268 "impl_name": "ssl", 00:24:57.268 "recv_buf_size": 4096, 00:24:57.268 "send_buf_size": 4096, 00:24:57.268 "tls_version": 0, 00:24:57.268 "zerocopy_threshold": 0 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "sock_impl_set_options", 00:24:57.268 "params": { 00:24:57.268 "enable_ktls": false, 00:24:57.268 "enable_placement_id": 0, 00:24:57.268 "enable_quickack": false, 00:24:57.268 "enable_recv_pipe": true, 00:24:57.268 "enable_zerocopy_send_client": false, 00:24:57.268 "enable_zerocopy_send_server": true, 00:24:57.268 "impl_name": "posix", 00:24:57.268 "recv_buf_size": 2097152, 00:24:57.268 "send_buf_size": 2097152, 00:24:57.268 "tls_version": 0, 00:24:57.268 "zerocopy_threshold": 0 00:24:57.268 } 00:24:57.268 } 00:24:57.268 ] 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "subsystem": "vmd", 00:24:57.268 "config": [] 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "subsystem": "accel", 00:24:57.268 "config": [ 00:24:57.268 { 00:24:57.268 "method": 
"accel_set_options", 00:24:57.268 "params": { 00:24:57.268 "buf_count": 2048, 00:24:57.268 "large_cache_size": 16, 00:24:57.268 "sequence_count": 2048, 00:24:57.268 "small_cache_size": 128, 00:24:57.268 "task_count": 2048 00:24:57.268 } 00:24:57.268 } 00:24:57.268 ] 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "subsystem": "bdev", 00:24:57.268 "config": [ 00:24:57.268 { 00:24:57.268 "method": "bdev_set_options", 00:24:57.268 "params": { 00:24:57.268 "bdev_auto_examine": true, 00:24:57.268 "bdev_io_cache_size": 256, 00:24:57.268 "bdev_io_pool_size": 65535, 00:24:57.268 "iobuf_large_cache_size": 16, 00:24:57.268 "iobuf_small_cache_size": 128 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "bdev_raid_set_options", 00:24:57.268 "params": { 00:24:57.268 "process_window_size_kb": 1024 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "bdev_iscsi_set_options", 00:24:57.268 "params": { 00:24:57.268 "timeout_sec": 30 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "bdev_nvme_set_options", 00:24:57.268 "params": { 00:24:57.268 "action_on_timeout": "none", 00:24:57.268 "allow_accel_sequence": false, 00:24:57.268 "arbitration_burst": 0, 00:24:57.268 "bdev_retry_count": 3, 00:24:57.268 "ctrlr_loss_timeout_sec": 0, 00:24:57.268 "delay_cmd_submit": true, 00:24:57.268 "dhchap_dhgroups": [ 00:24:57.268 "null", 00:24:57.268 "ffdhe2048", 00:24:57.268 "ffdhe3072", 00:24:57.268 "ffdhe4096", 00:24:57.268 "ffdhe6144", 00:24:57.268 "ffdhe8192" 00:24:57.268 ], 00:24:57.268 "dhchap_digests": [ 00:24:57.268 "sha256", 00:24:57.268 "sha384", 00:24:57.268 "sha512" 00:24:57.268 ], 00:24:57.268 "disable_auto_failback": false, 00:24:57.268 "fast_io_fail_timeout_sec": 0, 00:24:57.268 "generate_uuids": false, 00:24:57.268 "high_priority_weight": 0, 00:24:57.268 "io_path_stat": false, 00:24:57.268 "io_queue_requests": 512, 00:24:57.268 "keep_alive_timeout_ms": 10000, 00:24:57.268 "low_priority_weight": 0, 00:24:57.268 "medium_priority_weight": 0, 00:24:57.268 "nvme_adminq_poll_period_us": 10000, 00:24:57.268 "nvme_error_stat": false, 00:24:57.268 "nvme_ioq_poll_period_us": 0, 00:24:57.268 "rdma_cm_event_timeout_ms": 0, 00:24:57.268 "rdma_max_cq_size": 0, 00:24:57.268 "rdma_srq_size": 0, 00:24:57.268 "reconnect_delay_sec": 0, 00:24:57.268 "timeout_admin_us": 0, 00:24:57.268 "timeout_us": 0, 00:24:57.268 "transport_ack_timeout": 0, 00:24:57.268 "transport_retry_count": 4, 00:24:57.268 "transport_tos": 0 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "bdev_nvme_attach_controller", 00:24:57.268 "params": { 00:24:57.268 "adrfam": "IPv4", 00:24:57.268 "ctrlr_loss_timeout_sec": 0, 00:24:57.268 "ddgst": false, 00:24:57.268 "fast_io_fail_timeout_sec": 0, 00:24:57.268 "hdgst": false, 00:24:57.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:57.268 "name": "nvme0", 00:24:57.268 "prchk_guard": false, 00:24:57.268 "prchk_reftag": false, 00:24:57.268 "psk": "key0", 00:24:57.268 "reconnect_delay_sec": 0, 00:24:57.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:57.268 "traddr": "127.0.0.1", 00:24:57.268 "trsvcid": "4420", 00:24:57.268 "trtype": "TCP" 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "bdev_nvme_set_hotplug", 00:24:57.268 "params": { 00:24:57.268 "enable": false, 00:24:57.268 "period_us": 100000 00:24:57.268 } 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "method": "bdev_wait_for_examine" 00:24:57.268 } 00:24:57.268 ] 00:24:57.268 }, 00:24:57.268 { 00:24:57.268 "subsystem": "nbd", 00:24:57.268 "config": [] 00:24:57.268 } 
00:24:57.268 ] 00:24:57.268 }' 00:24:57.268 19:53:22 keyring_file -- keyring/file.sh@114 -- # killprocess 100391 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100391 ']' 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100391 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100391 00:24:57.268 killing process with pid 100391 00:24:57.268 Received shutdown signal, test time was about 1.000000 seconds 00:24:57.268 00:24:57.268 Latency(us) 00:24:57.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.268 =================================================================================================================== 00:24:57.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100391' 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@967 -- # kill 100391 00:24:57.268 19:53:22 keyring_file -- common/autotest_common.sh@972 -- # wait 100391 00:24:57.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:57.268 19:53:22 keyring_file -- keyring/file.sh@117 -- # bperfpid=100868 00:24:57.269 19:53:22 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:57.269 19:53:22 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100868 /var/tmp/bperf.sock 00:24:57.269 19:53:22 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:57.269 "subsystems": [ 00:24:57.269 { 00:24:57.269 "subsystem": "keyring", 00:24:57.269 "config": [ 00:24:57.269 { 00:24:57.269 "method": "keyring_file_add_key", 00:24:57.269 "params": { 00:24:57.269 "name": "key0", 00:24:57.269 "path": "/tmp/tmp.v5fBC7ST7e" 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "keyring_file_add_key", 00:24:57.269 "params": { 00:24:57.269 "name": "key1", 00:24:57.269 "path": "/tmp/tmp.OK3pvB0Amu" 00:24:57.269 } 00:24:57.269 } 00:24:57.269 ] 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "subsystem": "iobuf", 00:24:57.269 "config": [ 00:24:57.269 { 00:24:57.269 "method": "iobuf_set_options", 00:24:57.269 "params": { 00:24:57.269 "large_bufsize": 135168, 00:24:57.269 "large_pool_count": 1024, 00:24:57.269 "small_bufsize": 8192, 00:24:57.269 "small_pool_count": 8192 00:24:57.269 } 00:24:57.269 } 00:24:57.269 ] 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "subsystem": "sock", 00:24:57.269 "config": [ 00:24:57.269 { 00:24:57.269 "method": "sock_set_default_impl", 00:24:57.269 "params": { 00:24:57.269 "impl_name": "posix" 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "sock_impl_set_options", 00:24:57.269 "params": { 00:24:57.269 "enable_ktls": false, 00:24:57.269 "enable_placement_id": 0, 00:24:57.269 "enable_quickack": false, 00:24:57.269 "enable_recv_pipe": true, 00:24:57.269 "enable_zerocopy_send_client": false, 00:24:57.269 "enable_zerocopy_send_server": true, 00:24:57.269 "impl_name": "ssl", 00:24:57.269 "recv_buf_size": 4096, 
00:24:57.269 "send_buf_size": 4096, 00:24:57.269 "tls_version": 0, 00:24:57.269 "zerocopy_threshold": 0 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "sock_impl_set_options", 00:24:57.269 "params": { 00:24:57.269 "enable_ktls": false, 00:24:57.269 "enable_placement_id": 0, 00:24:57.269 "enable_quickack": false, 00:24:57.269 "enable_recv_pipe": true, 00:24:57.269 "enable_zerocopy_send_client": false, 00:24:57.269 "enable_zerocopy_send_server": true, 00:24:57.269 "impl_name": "posix", 00:24:57.269 "recv_buf_size": 2097152, 00:24:57.269 "send_buf_size": 2097152, 00:24:57.269 "tls_version": 0, 00:24:57.269 "zerocopy_threshold": 0 00:24:57.269 } 00:24:57.269 } 00:24:57.269 ] 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "subsystem": "vmd", 00:24:57.269 "config": [] 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "subsystem": "accel", 00:24:57.269 "config": [ 00:24:57.269 { 00:24:57.269 "method": "accel_set_options", 00:24:57.269 "params": { 00:24:57.269 "buf_count": 2048, 00:24:57.269 "large_cache_size": 16, 00:24:57.269 "sequence_count": 2048, 00:24:57.269 "small_cache_size": 128, 00:24:57.269 "task_count": 2048 00:24:57.269 } 00:24:57.269 } 00:24:57.269 ] 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "subsystem": "bdev", 00:24:57.269 "config": [ 00:24:57.269 { 00:24:57.269 "method": "bdev_set_options", 00:24:57.269 "params": { 00:24:57.269 "bdev_auto_examine": true, 00:24:57.269 "bdev_io_cache_size": 256, 00:24:57.269 "bdev_io_pool_size": 65535, 00:24:57.269 "iobuf_large_cache_size": 16, 00:24:57.269 "iobuf_small_cache_size": 128 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "bdev_raid_set_options", 00:24:57.269 "params": { 00:24:57.269 "process_window_size_kb": 1024 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "bdev_iscsi_set_options", 00:24:57.269 "params": { 00:24:57.269 "timeout_sec": 30 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "bdev_nvme_set_options", 00:24:57.269 "params": { 00:24:57.269 "action_on_timeout": "none", 00:24:57.269 "allow_accel_sequence": false, 00:24:57.269 "arbitration_burst": 0, 00:24:57.269 "bdev_retry_count": 3, 00:24:57.269 "ctrlr_loss_timeout_sec": 0, 00:24:57.269 "delay_cmd_submit": true, 00:24:57.269 "dhchap_dhgroups": [ 00:24:57.269 "null", 00:24:57.269 "ffdhe2048", 00:24:57.269 "ffdhe3072", 00:24:57.269 "ffdhe4096", 00:24:57.269 "ffdhe6144", 00:24:57.269 "ffdhe8192" 00:24:57.269 ], 00:24:57.269 "dhchap_digests": [ 00:24:57.269 "sha256", 00:24:57.269 "sha384", 00:24:57.269 "sha512" 00:24:57.269 ], 00:24:57.269 "disable_auto_failback": false, 00:24:57.269 "fast_io_fail_timeout_sec": 0, 00:24:57.269 "generate_uuids": false, 00:24:57.269 "high_priority_weight": 0, 00:24:57.269 "io_path_stat": false, 00:24:57.269 "io_queue_requests": 512, 00:24:57.269 "keep_alive_timeout_ms": 10000, 00:24:57.269 "low_priority_weight": 0, 00:24:57.269 "medium_priority_weight": 0, 00:24:57.269 "nvme_adminq_poll_period_us": 10000, 00:24:57.269 "nvme_error_stat": false, 00:24:57.269 "nvme_ioq_poll_period_us": 0, 00:24:57.269 "rdma_cm_event_timeout_ms": 0, 00:24:57.269 "rdma_max_cq_size": 0, 00:24:57.269 "rdma_srq_size": 0, 00:24:57.269 "reconnect_delay_sec": 0, 00:24:57.269 "timeout_admin_us": 0, 00:24:57.269 "timeout_us": 0, 00:24:57.269 "transport_ack_timeout": 0, 00:24:57.269 "transport_retry_count": 4, 00:24:57.269 "transport_tos": 0 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "bdev_nvme_attach_controller", 00:24:57.269 "params": { 00:24:57.269 "adrfam": "IPv4", 
00:24:57.269 "ctrlr_loss_timeout_sec": 0, 00:24:57.269 "ddgst": false, 00:24:57.269 "fast_io_fail_timeout_sec": 0, 00:24:57.269 "hdgst": false, 00:24:57.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:57.269 "name": "nvme0", 00:24:57.269 "prchk_guard": false, 00:24:57.269 "prchk_reftag": false, 00:24:57.269 "psk": "key0", 00:24:57.269 "reconnect_delay_sec": 0, 00:24:57.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:57.269 "traddr": "127.0.0.1", 00:24:57.269 "trsvcid": "4420", 00:24:57.269 "trtype": "TCP" 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "bdev_nvme_set_hotplug", 00:24:57.269 "params": { 00:24:57.269 "enable": false, 00:24:57.269 "period_us": 100000 00:24:57.269 } 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "method": "bdev_wait_for_examine" 00:24:57.269 } 00:24:57.269 ] 00:24:57.269 }, 00:24:57.269 { 00:24:57.269 "subsystem": "nbd", 00:24:57.269 "config": [] 00:24:57.269 } 00:24:57.269 ] 00:24:57.269 }' 00:24:57.269 19:53:22 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100868 ']' 00:24:57.269 19:53:22 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:57.269 19:53:22 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.269 19:53:22 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:57.269 19:53:22 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.269 19:53:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:57.269 [2024-07-15 19:53:23.038882] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:24:57.269 [2024-07-15 19:53:23.039183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100868 ] 00:24:57.528 [2024-07-15 19:53:23.168666] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.528 [2024-07-15 19:53:23.267088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.786 [2024-07-15 19:53:23.447045] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:58.352 19:53:24 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.352 19:53:24 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:58.352 19:53:24 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:58.352 19:53:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.352 19:53:24 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:58.609 19:53:24 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:58.609 19:53:24 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:58.609 19:53:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:58.609 19:53:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.609 19:53:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.609 19:53:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.609 19:53:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:58.867 19:53:24 keyring_file -- keyring/file.sh@121 
-- # (( 2 == 2 )) 00:24:58.867 19:53:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:58.867 19:53:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:58.867 19:53:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.867 19:53:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.867 19:53:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:58.867 19:53:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.124 19:53:24 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:59.124 19:53:24 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:59.124 19:53:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:59.124 19:53:24 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:59.690 19:53:25 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:59.690 19:53:25 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:59.690 19:53:25 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.v5fBC7ST7e /tmp/tmp.OK3pvB0Amu 00:24:59.690 19:53:25 keyring_file -- keyring/file.sh@20 -- # killprocess 100868 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100868 ']' 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100868 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100868 00:24:59.690 killing process with pid 100868 00:24:59.690 Received shutdown signal, test time was about 1.000000 seconds 00:24:59.690 00:24:59.690 Latency(us) 00:24:59.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.690 =================================================================================================================== 00:24:59.690 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100868' 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@967 -- # kill 100868 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@972 -- # wait 100868 00:24:59.690 19:53:25 keyring_file -- keyring/file.sh@21 -- # killprocess 100356 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100356 ']' 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100356 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100356 00:24:59.690 killing process with pid 100356 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:59.690 19:53:25 keyring_file -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 100356' 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@967 -- # kill 100356 00:24:59.690 [2024-07-15 19:53:25.438845] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:59.690 19:53:25 keyring_file -- common/autotest_common.sh@972 -- # wait 100356 00:25:00.263 00:25:00.263 real 0m16.284s 00:25:00.263 user 0m40.603s 00:25:00.263 sys 0m3.410s 00:25:00.263 19:53:25 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.263 ************************************ 00:25:00.263 END TEST keyring_file 00:25:00.263 19:53:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:00.263 ************************************ 00:25:00.263 19:53:25 -- common/autotest_common.sh@1142 -- # return 0 00:25:00.263 19:53:25 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:25:00.263 19:53:25 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:00.263 19:53:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:00.263 19:53:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.263 19:53:25 -- common/autotest_common.sh@10 -- # set +x 00:25:00.263 ************************************ 00:25:00.263 START TEST keyring_linux 00:25:00.263 ************************************ 00:25:00.263 19:53:25 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:00.263 * Looking for test storage... 00:25:00.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:00.263 19:53:25 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:00.263 19:53:25 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:da6ed89b-4087-4382-817f-6cf647cbfbeb 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=da6ed89b-4087-4382-817f-6cf647cbfbeb 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:00.263 19:53:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.263 19:53:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.263 19:53:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.263 19:53:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.263 19:53:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.263 19:53:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.263 19:53:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:00.263 19:53:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:00.263 19:53:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:00.263 19:53:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:00.263 19:53:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:00.263 19:53:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:00.263 19:53:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:00.263 19:53:25 keyring_linux -- 
keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:00.263 19:53:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:00.263 19:53:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:00.263 19:53:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:00.263 19:53:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:00.263 19:53:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:00.263 19:53:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:00.263 19:53:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:25:00.263 19:53:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:25:00.533 19:53:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:00.533 /tmp/:spdk-test:key0 00:25:00.533 19:53:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:00.534 19:53:26 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:00.534 19:53:26 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:00.534 19:53:26 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:00.534 19:53:26 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:00.534 19:53:26 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:00.534 19:53:26 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:00.534 19:53:26 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:00.534 19:53:26 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:00.534 19:53:26 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:25:00.534 19:53:26 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:00.534 19:53:26 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:25:00.534 19:53:26 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:25:00.534 19:53:26 keyring_linux -- nvmf/common.sh@705 -- # python - 00:25:00.534 19:53:26 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:00.534 /tmp/:spdk-test:key1 00:25:00.534 19:53:26 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:00.534 19:53:26 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=101017 00:25:00.534 19:53:26 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:00.534 19:53:26 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 101017 00:25:00.534 19:53:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101017 ']' 00:25:00.534 19:53:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.534 19:53:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.534 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.534 19:53:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.534 19:53:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.534 19:53:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:00.534 [2024-07-15 19:53:26.154557] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 00:25:00.534 [2024-07-15 19:53:26.154671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101017 ] 00:25:00.534 [2024-07-15 19:53:26.282119] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.795 [2024-07-15 19:53:26.379192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.362 19:53:27 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.362 19:53:27 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:25:01.362 19:53:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:01.362 19:53:27 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.362 19:53:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:01.362 [2024-07-15 19:53:27.110524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.362 null0 00:25:01.362 [2024-07-15 19:53:27.142424] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.362 [2024-07-15 19:53:27.142658] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:01.620 19:53:27 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.620 19:53:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:01.620 258908243 00:25:01.620 19:53:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:01.620 6197103 00:25:01.620 19:53:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=101052 00:25:01.620 19:53:27 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:01.620 19:53:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 101052 /var/tmp/bperf.sock 00:25:01.620 19:53:27 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 101052 ']' 00:25:01.620 19:53:27 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:01.620 19:53:27 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:01.620 19:53:27 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:01.620 19:53:27 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.620 19:53:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:01.620 [2024-07-15 19:53:27.229474] Starting SPDK v24.09-pre git sha1 c9ef451fa / DPDK 24.03.0 initialization... 
00:25:01.620 [2024-07-15 19:53:27.229587] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101052 ] 00:25:01.620 [2024-07-15 19:53:27.370227] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.877 [2024-07-15 19:53:27.501296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.443 19:53:28 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.443 19:53:28 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:25:02.443 19:53:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:02.443 19:53:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:02.702 19:53:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:02.702 19:53:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:03.289 19:53:28 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:03.289 19:53:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:03.289 [2024-07-15 19:53:29.043797] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.547 nvme0n1 00:25:03.547 19:53:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:03.547 19:53:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:03.547 19:53:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:03.547 19:53:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:03.547 19:53:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:03.547 19:53:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.806 19:53:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:03.806 19:53:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:03.806 19:53:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:03.806 19:53:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:03.806 19:53:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.806 19:53:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:03.806 19:53:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.064 19:53:29 keyring_linux -- keyring/linux.sh@25 -- # sn=258908243 00:25:04.064 19:53:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:04.064 19:53:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:04.064 19:53:29 keyring_linux -- keyring/linux.sh@26 -- # [[ 258908243 == \2\5\8\9\0\8\2\4\3 ]] 00:25:04.064 19:53:29 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 258908243 00:25:04.064 19:53:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:04.064 19:53:29 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:04.064 Running I/O for 1 seconds... 00:25:05.438 00:25:05.438 Latency(us) 00:25:05.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.438 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:05.438 nvme0n1 : 1.01 12197.33 47.65 0.00 0.00 10432.59 8162.21 17158.52 00:25:05.438 =================================================================================================================== 00:25:05.438 Total : 12197.33 47.65 0.00 0.00 10432.59 8162.21 17158.52 00:25:05.438 0 00:25:05.438 19:53:30 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:05.438 19:53:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:05.438 19:53:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:05.438 19:53:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:05.438 19:53:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:05.438 19:53:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:05.438 19:53:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.438 19:53:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:05.697 19:53:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:05.697 19:53:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:05.697 19:53:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:05.697 19:53:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:05.697 19:53:31 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:25:05.697 19:53:31 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:05.697 19:53:31 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:05.697 19:53:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.697 19:53:31 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:05.697 19:53:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.697 19:53:31 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:05.697 19:53:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:25:05.956 [2024-07-15 19:53:31.616663] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:05.956 [2024-07-15 19:53:31.616704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2345020 (107): Transport endpoint is not connected 00:25:05.956 [2024-07-15 19:53:31.617693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2345020 (9): Bad file descriptor 00:25:05.956 [2024-07-15 19:53:31.618691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.956 [2024-07-15 19:53:31.618728] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:05.956 [2024-07-15 19:53:31.618739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.956 2024/07/15 19:53:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:05.956 request: 00:25:05.956 { 00:25:05.956 "method": "bdev_nvme_attach_controller", 00:25:05.956 "params": { 00:25:05.956 "name": "nvme0", 00:25:05.956 "trtype": "tcp", 00:25:05.956 "traddr": "127.0.0.1", 00:25:05.956 "adrfam": "ipv4", 00:25:05.956 "trsvcid": "4420", 00:25:05.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.956 "prchk_reftag": false, 00:25:05.956 "prchk_guard": false, 00:25:05.956 "hdgst": false, 00:25:05.956 "ddgst": false, 00:25:05.956 "psk": ":spdk-test:key1" 00:25:05.956 } 00:25:05.956 } 00:25:05.956 Got JSON-RPC error response 00:25:05.956 GoRPCClient: error on JSON-RPC call 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@33 -- # sn=258908243 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 258908243 00:25:05.956 1 links removed 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@33 -- # sn=6197103 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 6197103 00:25:05.956 1 links removed 00:25:05.956 19:53:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 101052 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101052 ']' 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101052 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101052 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101052' 00:25:05.956 killing process with pid 101052 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@967 -- # kill 101052 00:25:05.956 Received shutdown signal, test time was about 1.000000 seconds 00:25:05.956 00:25:05.956 Latency(us) 00:25:05.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.956 =================================================================================================================== 00:25:05.956 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.956 19:53:31 keyring_linux -- common/autotest_common.sh@972 -- # wait 101052 00:25:06.215 19:53:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 101017 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 101017 ']' 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 101017 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101017 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:06.215 killing process with pid 101017 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101017' 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@967 -- # kill 101017 00:25:06.215 19:53:31 keyring_linux -- common/autotest_common.sh@972 -- # wait 101017 00:25:06.782 00:25:06.782 real 0m6.462s 00:25:06.782 user 0m12.522s 00:25:06.782 sys 0m1.667s 00:25:06.782 19:53:32 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:06.782 ************************************ 00:25:06.782 END TEST keyring_linux 00:25:06.782 19:53:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:06.782 ************************************ 00:25:06.782 19:53:32 -- common/autotest_common.sh@1142 -- # return 0 00:25:06.782 19:53:32 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 
00:25:06.782 19:53:32 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:06.782 19:53:32 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:06.782 19:53:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:06.782 19:53:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:06.782 19:53:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:25:06.782 19:53:32 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:25:06.782 19:53:32 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:25:06.782 19:53:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:06.782 19:53:32 -- common/autotest_common.sh@10 -- # set +x 00:25:06.782 19:53:32 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:25:06.782 19:53:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:06.782 19:53:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:06.782 19:53:32 -- common/autotest_common.sh@10 -- # set +x 00:25:08.682 INFO: APP EXITING 00:25:08.682 INFO: killing all VMs 00:25:08.682 INFO: killing vhost app 00:25:08.682 INFO: EXIT DONE 00:25:08.940 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:08.940 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:09.198 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:09.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:09.764 Cleaning 00:25:09.764 Removing: /var/run/dpdk/spdk0/config 00:25:09.764 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:09.764 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:09.764 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:09.764 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:09.764 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:09.764 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:09.764 Removing: /var/run/dpdk/spdk1/config 00:25:09.764 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:09.764 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:09.764 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:09.764 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:09.764 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:09.764 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:09.764 Removing: /var/run/dpdk/spdk2/config 00:25:09.764 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:09.764 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:09.764 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:09.764 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:09.764 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:09.764 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:09.764 Removing: /var/run/dpdk/spdk3/config 00:25:09.764 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:09.764 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:09.764 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:09.764 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:09.764 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:09.764 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:09.764 
Removing: /var/run/dpdk/spdk4/config 00:25:09.764 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:09.764 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:09.764 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:09.764 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:09.764 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:09.764 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:09.764 Removing: /dev/shm/nvmf_trace.0 00:25:09.764 Removing: /dev/shm/spdk_tgt_trace.pid60529 00:25:09.764 Removing: /var/run/dpdk/spdk0 00:25:10.021 Removing: /var/run/dpdk/spdk1 00:25:10.021 Removing: /var/run/dpdk/spdk2 00:25:10.021 Removing: /var/run/dpdk/spdk3 00:25:10.021 Removing: /var/run/dpdk/spdk4 00:25:10.021 Removing: /var/run/dpdk/spdk_pid100356 00:25:10.021 Removing: /var/run/dpdk/spdk_pid100391 00:25:10.021 Removing: /var/run/dpdk/spdk_pid100868 00:25:10.021 Removing: /var/run/dpdk/spdk_pid101017 00:25:10.021 Removing: /var/run/dpdk/spdk_pid101052 00:25:10.021 Removing: /var/run/dpdk/spdk_pid60384 00:25:10.021 Removing: /var/run/dpdk/spdk_pid60529 00:25:10.021 Removing: /var/run/dpdk/spdk_pid60790 00:25:10.021 Removing: /var/run/dpdk/spdk_pid60888 00:25:10.021 Removing: /var/run/dpdk/spdk_pid60922 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61037 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61067 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61185 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61465 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61641 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61718 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61810 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61899 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61938 00:25:10.021 Removing: /var/run/dpdk/spdk_pid61973 00:25:10.021 Removing: /var/run/dpdk/spdk_pid62035 00:25:10.021 Removing: /var/run/dpdk/spdk_pid62152 00:25:10.021 Removing: /var/run/dpdk/spdk_pid62778 00:25:10.021 Removing: /var/run/dpdk/spdk_pid62842 00:25:10.021 Removing: /var/run/dpdk/spdk_pid62911 00:25:10.021 Removing: /var/run/dpdk/spdk_pid62945 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63024 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63054 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63131 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63159 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63210 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63240 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63292 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63316 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63468 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63503 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63574 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63648 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63672 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63731 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63765 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63800 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63834 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63869 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63909 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63938 00:25:10.021 Removing: /var/run/dpdk/spdk_pid63978 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64007 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64047 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64076 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64116 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64147 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64187 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64216 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64256 
00:25:10.021 Removing: /var/run/dpdk/spdk_pid64286 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64331 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64369 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64403 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64439 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64503 00:25:10.021 Removing: /var/run/dpdk/spdk_pid64614 00:25:10.021 Removing: /var/run/dpdk/spdk_pid65030 00:25:10.021 Removing: /var/run/dpdk/spdk_pid68372 00:25:10.021 Removing: /var/run/dpdk/spdk_pid68716 00:25:10.021 Removing: /var/run/dpdk/spdk_pid71143 00:25:10.021 Removing: /var/run/dpdk/spdk_pid71519 00:25:10.021 Removing: /var/run/dpdk/spdk_pid71781 00:25:10.021 Removing: /var/run/dpdk/spdk_pid71827 00:25:10.021 Removing: /var/run/dpdk/spdk_pid72448 00:25:10.021 Removing: /var/run/dpdk/spdk_pid72884 00:25:10.021 Removing: /var/run/dpdk/spdk_pid72934 00:25:10.021 Removing: /var/run/dpdk/spdk_pid73292 00:25:10.021 Removing: /var/run/dpdk/spdk_pid73826 00:25:10.021 Removing: /var/run/dpdk/spdk_pid74275 00:25:10.021 Removing: /var/run/dpdk/spdk_pid75242 00:25:10.021 Removing: /var/run/dpdk/spdk_pid76226 00:25:10.021 Removing: /var/run/dpdk/spdk_pid76340 00:25:10.021 Removing: /var/run/dpdk/spdk_pid76413 00:25:10.021 Removing: /var/run/dpdk/spdk_pid77878 00:25:10.021 Removing: /var/run/dpdk/spdk_pid78103 00:25:10.021 Removing: /var/run/dpdk/spdk_pid83391 00:25:10.021 Removing: /var/run/dpdk/spdk_pid83832 00:25:10.021 Removing: /var/run/dpdk/spdk_pid83941 00:25:10.280 Removing: /var/run/dpdk/spdk_pid84108 00:25:10.280 Removing: /var/run/dpdk/spdk_pid84148 00:25:10.280 Removing: /var/run/dpdk/spdk_pid84198 00:25:10.280 Removing: /var/run/dpdk/spdk_pid84239 00:25:10.280 Removing: /var/run/dpdk/spdk_pid84404 00:25:10.280 Removing: /var/run/dpdk/spdk_pid84565 00:25:10.280 Removing: /var/run/dpdk/spdk_pid84829 00:25:10.280 Removing: /var/run/dpdk/spdk_pid84946 00:25:10.280 Removing: /var/run/dpdk/spdk_pid85195 00:25:10.280 Removing: /var/run/dpdk/spdk_pid85326 00:25:10.280 Removing: /var/run/dpdk/spdk_pid85456 00:25:10.280 Removing: /var/run/dpdk/spdk_pid85805 00:25:10.280 Removing: /var/run/dpdk/spdk_pid86238 00:25:10.280 Removing: /var/run/dpdk/spdk_pid86542 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87043 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87046 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87386 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87400 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87414 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87445 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87451 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87812 00:25:10.280 Removing: /var/run/dpdk/spdk_pid87855 00:25:10.280 Removing: /var/run/dpdk/spdk_pid88200 00:25:10.280 Removing: /var/run/dpdk/spdk_pid88446 00:25:10.280 Removing: /var/run/dpdk/spdk_pid88939 00:25:10.280 Removing: /var/run/dpdk/spdk_pid89526 00:25:10.280 Removing: /var/run/dpdk/spdk_pid90887 00:25:10.280 Removing: /var/run/dpdk/spdk_pid91480 00:25:10.280 Removing: /var/run/dpdk/spdk_pid91486 00:25:10.280 Removing: /var/run/dpdk/spdk_pid93411 00:25:10.280 Removing: /var/run/dpdk/spdk_pid93501 00:25:10.280 Removing: /var/run/dpdk/spdk_pid93596 00:25:10.280 Removing: /var/run/dpdk/spdk_pid93682 00:25:10.280 Removing: /var/run/dpdk/spdk_pid93839 00:25:10.280 Removing: /var/run/dpdk/spdk_pid93924 00:25:10.280 Removing: /var/run/dpdk/spdk_pid94020 00:25:10.280 Removing: /var/run/dpdk/spdk_pid94106 00:25:10.280 Removing: /var/run/dpdk/spdk_pid94447 00:25:10.280 Removing: /var/run/dpdk/spdk_pid95134 00:25:10.280 Removing: 
/var/run/dpdk/spdk_pid96497 00:25:10.280 Removing: /var/run/dpdk/spdk_pid96697 00:25:10.280 Removing: /var/run/dpdk/spdk_pid96989 00:25:10.280 Removing: /var/run/dpdk/spdk_pid97287 00:25:10.280 Removing: /var/run/dpdk/spdk_pid97833 00:25:10.280 Removing: /var/run/dpdk/spdk_pid97842 00:25:10.280 Removing: /var/run/dpdk/spdk_pid98200 00:25:10.280 Removing: /var/run/dpdk/spdk_pid98359 00:25:10.280 Removing: /var/run/dpdk/spdk_pid98516 00:25:10.280 Removing: /var/run/dpdk/spdk_pid98609 00:25:10.280 Removing: /var/run/dpdk/spdk_pid98768 00:25:10.280 Removing: /var/run/dpdk/spdk_pid98877 00:25:10.280 Removing: /var/run/dpdk/spdk_pid99544 00:25:10.280 Removing: /var/run/dpdk/spdk_pid99578 00:25:10.280 Removing: /var/run/dpdk/spdk_pid99614 00:25:10.280 Removing: /var/run/dpdk/spdk_pid99869 00:25:10.280 Removing: /var/run/dpdk/spdk_pid99905 00:25:10.280 Removing: /var/run/dpdk/spdk_pid99935 00:25:10.280 Clean 00:25:10.280 19:53:36 -- common/autotest_common.sh@1451 -- # return 0 00:25:10.280 19:53:36 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:25:10.280 19:53:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.280 19:53:36 -- common/autotest_common.sh@10 -- # set +x 00:25:10.539 19:53:36 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:25:10.539 19:53:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.539 19:53:36 -- common/autotest_common.sh@10 -- # set +x 00:25:10.539 19:53:36 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:10.539 19:53:36 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:10.539 19:53:36 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:10.539 19:53:36 -- spdk/autotest.sh@391 -- # hash lcov 00:25:10.539 19:53:36 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:10.539 19:53:36 -- spdk/autotest.sh@393 -- # hostname 00:25:10.539 19:53:36 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:10.798 geninfo: WARNING: invalid characters removed from testname! 
00:25:32.712 19:53:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:35.990 19:54:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:38.513 19:54:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:41.063 19:54:06 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:43.589 19:54:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:46.118 19:54:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:48.647 19:54:14 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:48.647 19:54:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.647 19:54:14 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:48.647 19:54:14 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.647 19:54:14 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.647 19:54:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.647 19:54:14 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.647 19:54:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.647 19:54:14 -- paths/export.sh@5 -- $ export PATH 00:25:48.647 19:54:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.647 19:54:14 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:48.647 19:54:14 -- common/autobuild_common.sh@444 -- $ date +%s 00:25:48.647 19:54:14 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721073254.XXXXXX 00:25:48.647 19:54:14 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721073254.dlzeQF 00:25:48.647 19:54:14 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:25:48.647 19:54:14 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:25:48.648 19:54:14 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:25:48.648 19:54:14 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:48.648 19:54:14 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:48.648 19:54:14 -- common/autobuild_common.sh@460 -- $ get_config_params 00:25:48.648 19:54:14 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:25:48.648 19:54:14 -- common/autotest_common.sh@10 -- $ set +x 00:25:48.648 19:54:14 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:25:48.648 19:54:14 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:25:48.648 19:54:14 -- pm/common@17 -- $ local monitor 00:25:48.648 19:54:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:48.648 19:54:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:48.648 19:54:14 -- pm/common@25 -- $ sleep 1 00:25:48.648 19:54:14 -- pm/common@21 -- $ date +%s 00:25:48.648 19:54:14 -- pm/common@21 -- $ date +%s 00:25:48.648 19:54:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721073254 00:25:48.648 19:54:14 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721073254 00:25:48.648 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721073254_collect-vmstat.pm.log 00:25:48.648 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721073254_collect-cpu-load.pm.log 00:25:49.600 19:54:15 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:25:49.600 19:54:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:49.600 19:54:15 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:49.600 19:54:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:49.600 19:54:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:49.600 19:54:15 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:49.600 19:54:15 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:49.600 19:54:15 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:49.600 19:54:15 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:49.600 19:54:15 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:49.600 19:54:15 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:49.600 19:54:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:49.600 19:54:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:49.600 19:54:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:49.600 19:54:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:49.600 19:54:15 -- pm/common@44 -- $ pid=102777 00:25:49.600 19:54:15 -- pm/common@50 -- $ kill -TERM 102777 00:25:49.600 19:54:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:49.600 19:54:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:49.600 19:54:15 -- pm/common@44 -- $ pid=102779 00:25:49.600 19:54:15 -- pm/common@50 -- $ kill -TERM 102779 00:25:49.600 + [[ -n 5150 ]] 00:25:49.600 + sudo kill 5150 00:25:49.608 [Pipeline] } 00:25:49.626 [Pipeline] // timeout 00:25:49.632 [Pipeline] } 00:25:49.648 [Pipeline] // stage 00:25:49.654 [Pipeline] } 00:25:49.672 [Pipeline] // catchError 00:25:49.682 [Pipeline] stage 00:25:49.684 [Pipeline] { (Stop VM) 00:25:49.699 [Pipeline] sh 00:25:49.975 + vagrant halt 00:25:53.267 ==> default: Halting domain... 00:25:59.839 [Pipeline] sh 00:26:00.115 + vagrant destroy -f 00:26:04.298 ==> default: Removing domain... 
00:26:04.310 [Pipeline] sh 00:26:04.588 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:26:04.599 [Pipeline] } 00:26:04.619 [Pipeline] // stage 00:26:04.626 [Pipeline] } 00:26:04.647 [Pipeline] // dir 00:26:04.654 [Pipeline] } 00:26:04.672 [Pipeline] // wrap 00:26:04.680 [Pipeline] } 00:26:04.696 [Pipeline] // catchError 00:26:04.707 [Pipeline] stage 00:26:04.710 [Pipeline] { (Epilogue) 00:26:04.749 [Pipeline] sh 00:26:05.030 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:11.594 [Pipeline] catchError 00:26:11.596 [Pipeline] { 00:26:11.610 [Pipeline] sh 00:26:11.882 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:12.140 Artifacts sizes are good 00:26:12.149 [Pipeline] } 00:26:12.166 [Pipeline] // catchError 00:26:12.176 [Pipeline] archiveArtifacts 00:26:12.183 Archiving artifacts 00:26:12.357 [Pipeline] cleanWs 00:26:12.366 [WS-CLEANUP] Deleting project workspace... 00:26:12.366 [WS-CLEANUP] Deferred wipeout is used... 00:26:12.372 [WS-CLEANUP] done 00:26:12.374 [Pipeline] } 00:26:12.387 [Pipeline] // stage 00:26:12.391 [Pipeline] } 00:26:12.404 [Pipeline] // node 00:26:12.408 [Pipeline] End of Pipeline 00:26:12.443 Finished: SUCCESS